2025-06-08 17:15:51,986 [ 355631 ] INFO : ClickHouse root is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse (runner:42, check_args_and_update_paths)
2025-06-08 17:15:51,986 [ 355631 ] INFO : Cases dir is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration (runner:86, check_args_and_update_paths)
2025-06-08 17:15:51,986 [ 355631 ] INFO : utils dir is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse/utils (runner:97, check_args_and_update_paths)
2025-06-08 17:15:51,986 [ 355631 ] INFO : base_configs_dir: /home/ubuntu/_work/ClickHouse/ClickHouse/programs/server, binary: /home/ubuntu/_work/_temp/test/build/clickhouse, cases_dir: /home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration (runner:99, check_args_and_update_paths)
clickhouse_integration_tests_volume
Running pytest container as:
'docker run --rm --name clickhouse_integration_tests_avp9uj --privileged --dns-search='.'
  --volume=/home/ubuntu/_work/_temp/test/build/clickhouse-odbc-bridge:/clickhouse-odbc-bridge
  --volume=/home/ubuntu/_work/_temp/test/build/clickhouse:/clickhouse
  --volume=/home/ubuntu/_work/_temp/test/build/clickhouse-library-bridge:/clickhouse-library-bridge
  --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/programs/server:/clickhouse-config
  --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration:/ClickHouse/tests/integration
  --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/backupview:/ClickHouse/utils/backupview
  --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/grpc-client/pb2:/ClickHouse/utils/grpc-client/pb2
  --volume=/run:/run/host:ro
  --volume=clickhouse_integration_tests_volume:/var/lib/docker
  -e DOCKER_DOTNET_CLIENT_TAG=11de0b29a15d
  -e DOCKER_HELPER_TAG=2cffe1eae894
  -e DOCKER_BASE_TAG=2993bc2bf171
  -e DOCKER_KERBERIZED_HADOOP_TAG=ce74919e88f5
  -e DOCKER_KERBEROS_KDC_TAG=9391ecdee8d7
  -e DOCKER_MYSQL_GOLANG_CLIENT_TAG=9bec2a638e6e
  -e DOCKER_MYSQL_JAVA_CLIENT_TAG=766bff31cfe4
  -e DOCKER_MYSQL_JS_CLIENT_TAG=41ba7c2ec2a1
  -e DOCKER_MYSQL_PHP_CLIENT_TAG=88be89c1e3b6
  -e DOCKER_NGINX_DAV_TAG=b55ac9cd7519
  -e DOCKER_POSTGRESQL_JAVA_CLIENT_TAG=a4eff5c7f4d6
  -e DOCKER_PYTHON_BOTTLE_TAG=a2d3dc777d0c
  -e DOCKER_CLIENT_TIMEOUT=300
  -e COMPOSE_HTTP_TIMEOUT=600
  -e PYTHONUNBUFFERED=1
  -e PYTEST_ADDOPTS="--dist=loadfile -n 10 -rfEps --run-id=0 --color=no --durations=0
    test_materialized_mysql_database/test.py::test_materialized_database_settings_materialized_mysql_tables_list
    test_materialized_mysql_database/test.py::test_materialized_database_support_all_kinds_of_mysql_datatype
    test_materialized_mysql_database/test.py::test_materialized_with_column_comments
    test_materialized_mysql_database/test.py::test_materialized_with_enum
    test_materialized_mysql_database/test.py::test_multi_table_update
    test_materialized_mysql_database/test.py::test_mysql_kill_sync_thread_restore_5_7
    test_materialized_mysql_database/test.py::test_mysql_kill_sync_thread_restore_8_0
    test_materialized_mysql_database/test.py::test_mysql_killed_while_insert_5_7
    test_materialized_mysql_database/test.py::test_mysql_killed_while_insert_8_0
    'test_materialized_mysql_database/test.py::test_mysql_settings[clickhouse_node0]'
    'test_materialized_mysql_database/test.py::test_mysql_settings[clickhouse_node1]'
    test_materialized_mysql_database/test.py::test_named_collections
    test_materialized_mysql_database/test.py::test_savepoint_query
    test_materialized_mysql_database/test.py::test_select_without_columns_5_7
    test_materialized_mysql_database/test.py::test_select_without_columns_8_0
    test_materialized_mysql_database/test.py::test_system_parts_table
    test_materialized_mysql_database/test.py::test_system_tables_table
    test_materialized_mysql_database/test.py::test_table_overrides
    test_materialized_mysql_database/test.py::test_table_table
    test_materialized_mysql_database/test.py::test_table_with_indexes
    test_materialized_mysql_database/test.py::test_text_blob_charset
    test_materialized_mysql_database/test.py::test_utf8mb4
    test_materialized_view_restart_server/test.py::test_materialized_view_with_subquery
    test_merge_table_over_distributed/test.py::test_filtering
    test_merge_table_over_distributed/test.py::test_global_in
    test_merge_table_over_distributed/test.py::test_select_table_name_from_merge_over_distributed
    test_merge_tree_settings_constraints/test.py::test_merge_tree_settings_constraints
    test_modify_engine_on_restart/test_unusual_path.py::test_modify_engine_on_restart_with_unusual_path
    test_move_partition_to_volume_async/test.py::test_async_alter_move
    test_move_partition_to_volume_async/test.py::test_sync_alter_move
    test_mutations_in_partitions_of_merge_tree/test.py::test_mutation_max_streams
    test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_merge_tree_with_where
    test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_merge_tree_without_where
    test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_replicated_merge_tree
    test_on_cluster_timeouts/test.py::test_long_query
    test_overcommit_tracker/test.py::test_user_overcommit
    'test_parallel_replicas_distributed_skip_shards/test.py::test_error_on_unavailable_shards[0]'
    'test_parallel_replicas_distributed_skip_shards/test.py::test_error_on_unavailable_shards[1]'
    'test_parallel_replicas_distributed_skip_shards/test.py::test_no_unavailable_shards[0]'
    'test_parallel_replicas_distributed_skip_shards/test.py::test_no_unavailable_shards[1]'
    'test_parallel_replicas_distributed_skip_shards/test.py::test_skip_unavailable_shards[0]'
    'test_parallel_replicas_distributed_skip_shards/test.py::test_skip_unavailable_shards[1]'
    'test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-10-0]'
    'test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-10-1]'
    'test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-2-0]'
    'test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-2-1]'
    'test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-3-0]'
    'test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-3-1]'
    'test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-4-0]'
    'test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-4-1]'
    'test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-10-0]'
    'test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-10-1]'
    'test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-2-0]'
    'test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-2-1]'
    'test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-3-0]'
    'test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-3-1]'
    'test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-4-0]'
    'test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-4-1]'
    test_passing_max_partitions_to_read_remotely/test.py::test_default_database_on_cluster
    test_peak_memory_usage/test.py::test_clickhouse_client_max_peak_memory_single_node
    test_peak_memory_usage/test.py::test_clickhouse_client_max_peak_memory_usage_distributed
    test_postgresql_protocol/test.py::test_java_client
    test_postgresql_protocol/test.py::test_psql_client
    test_postgresql_protocol/test.py::test_python_client
    test_postgresql_replica_database_engine_1/test.py::test_abrupt_connection_loss_while_heavy_replication
    test_postgresql_replica_database_engine_1/test.py::test_abrupt_server_restart_while_heavy_replication
    test_postgresql_replica_database_engine_1/test.py::test_changing_replica_identity_value
    test_postgresql_replica_database_engine_1/test.py::test_clickhouse_restart
    test_postgresql_replica_database_engine_1/test.py::test_concurrent_transactions
    test_postgresql_replica_database_engine_1/test.py::test_different_data_types
    test_postgresql_replica_database_engine_1/test.py::test_drop_database_while_replication_startup_not_finished
    test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_all_database_tables
    test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_subset_of_database_tables
    test_postgresql_replica_database_engine_1/test.py::test_many_concurrent_queries
    test_postgresql_replica_database_engine_1/test.py::test_multiple_databases
    test_postgresql_replica_database_engine_1/test.py::test_quoting_1
    test_postgresql_replica_database_engine_1/test.py::test_quoting_2
    test_postgresql_replica_database_engine_1/test.py::test_replica_identity_index
    test_postgresql_replica_database_engine_1/test.py::test_replicating_dml
    test_postgresql_replica_database_engine_1/test.py::test_restart_server_while_replication_startup_not_finished
    test_postgresql_replica_database_engine_1/test.py::test_single_transaction
    test_postgresql_replica_database_engine_1/test.py::test_table_schema_changes
    test_postgresql_replica_database_engine_1/test.py::test_user_managed_slots
    test_postgresql_replica_database_engine_1/test.py::test_virtual_columns
    test_prometheus_endpoint/test.py::test_prometheus_endpoint
    test_quota/test.py::test_add_remove_interval
    test_quota/test.py::test_add_remove_quota
    test_quota/test.py::test_consumption_of_show_clusters
    test_quota/test.py::test_consumption_of_show_databases
    test_quota/test.py::test_consumption_of_show_privileges
    test_quota/test.py::test_consumption_of_show_processlist
    test_quota/test.py::test_consumption_of_show_tables
    test_quota/test.py::test_dcl_introspection
    test_quota/test.py::test_dcl_management
    test_quota/test.py::test_exceed_quota
    test_quota/test.py::test_query_inserts
    test_quota/test.py::test_quota_from_users_xml
    test_quota/test.py::test_reload_users_xml_by_timer
    test_quota/test.py::test_simpliest_quota
    test_quota/test.py::test_tracking_quota
    -vvv"
  altinityinfra/integration-tests-runner:9d492c2eec24 '.
Start tests
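The options above reach pytest through the PYTEST_ADDOPTS environment variable; --dist=loadfile together with -n 10 makes pytest-xdist group tests by file, so every test from one file lands on the same worker (the [gw0]..[gw9] tags in the output below). A minimal sketch of reproducing a similar run locally, assuming a ClickHouse checkout and pytest-xdist installed; the --run-id option comes from the repository's own test tooling and is omitted here, and the selection is shortened to a single module:

    import os
    import subprocess

    # hypothetical local reproduction of the runner's pytest invocation;
    # in CI this environment lives inside the integration-tests-runner image
    env = dict(os.environ, PYTHONUNBUFFERED="1")
    subprocess.run(
        [
            "pytest",
            "--dist=loadfile",  # tests from one file stay on one xdist worker
            "-n", "10",         # ten workers, matching the [gw0]..[gw9] tags below
            "-rfEps",
            "--color=no",
            "--durations=0",
            "test_merge_table_over_distributed/test.py",
        ],
        cwd="tests/integration",  # assumed checkout location
        env=env,
        check=False,
    )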
============================= test session starts ==============================
platform linux -- Python 3.10.12, pytest-7.4.4, pluggy-1.5.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /ClickHouse/tests/integration
configfile: pytest.ini
plugins: order-1.0.1, random-0.2, timeout-2.2.0, repeat-0.9.3, reportlog-0.4.0, xdist-3.5.0
timeout: 900.0s
timeout method: signal
timeout func_only: False
created: 10/10 workers
10 workers [100 items]

scheduling tests via LoadFileScheduling

test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-10-0]
test_mutations_in_partitions_of_merge_tree/test.py::test_mutation_max_streams
test_parallel_replicas_distributed_skip_shards/test.py::test_error_on_unavailable_shards[0]
test_materialized_mysql_database/test.py::test_materialized_database_settings_materialized_mysql_tables_list
test_quota/test.py::test_add_remove_interval
test_postgresql_protocol/test.py::test_java_client
test_merge_table_over_distributed/test.py::test_filtering
test_peak_memory_usage/test.py::test_clickhouse_client_max_peak_memory_single_node
test_move_partition_to_volume_async/test.py::test_async_alter_move
test_postgresql_replica_database_engine_1/test.py::test_abrupt_connection_loss_while_heavy_replication
[gw6] [ 1%] PASSED test_merge_table_over_distributed/test.py::test_filtering
test_merge_table_over_distributed/test.py::test_global_in
[gw6] [ 2%] PASSED test_merge_table_over_distributed/test.py::test_global_in
test_merge_table_over_distributed/test.py::test_select_table_name_from_merge_over_distributed
[gw7] [ 3%] PASSED test_postgresql_protocol/test.py::test_java_client
test_postgresql_protocol/test.py::test_psql_client
[gw6] [ 4%] PASSED test_merge_table_over_distributed/test.py::test_select_table_name_from_merge_over_distributed
[gw7] [ 5%] PASSED test_postgresql_protocol/test.py::test_psql_client
test_postgresql_protocol/test.py::test_python_client
[gw7] [ 6%] PASSED test_postgresql_protocol/test.py::test_python_client
[gw3] [ 7%] PASSED test_quota/test.py::test_add_remove_interval
test_quota/test.py::test_add_remove_quota
[gw9] [ 8%] PASSED test_peak_memory_usage/test.py::test_clickhouse_client_max_peak_memory_single_node
test_peak_memory_usage/test.py::test_clickhouse_client_max_peak_memory_usage_distributed
[gw9] [ 9%] PASSED test_peak_memory_usage/test.py::test_clickhouse_client_max_peak_memory_usage_distributed
[gw8] [ 10%] PASSED test_move_partition_to_volume_async/test.py::test_async_alter_move
test_move_partition_to_volume_async/test.py::test_sync_alter_move
test_modify_engine_on_restart/test_unusual_path.py::test_modify_engine_on_restart_with_unusual_path
[gw1] [ 11%] FAILED test_postgresql_replica_database_engine_1/test.py::test_abrupt_connection_loss_while_heavy_replication
[gw4] [ 12%] PASSED test_parallel_replicas_distributed_skip_shards/test.py::test_error_on_unavailable_shards[0]
test_parallel_replicas_distributed_skip_shards/test.py::test_error_on_unavailable_shards[1]
[gw3] [ 13%] PASSED test_quota/test.py::test_add_remove_quota
test_quota/test.py::test_consumption_of_show_clusters
test_postgresql_replica_database_engine_1/test.py::test_abrupt_server_restart_while_heavy_replication
[gw4] [ 14%] PASSED test_parallel_replicas_distributed_skip_shards/test.py::test_error_on_unavailable_shards[1]
test_parallel_replicas_distributed_skip_shards/test.py::test_no_unavailable_shards[0]
[gw8] [ 15%] PASSED test_move_partition_to_volume_async/test.py::test_sync_alter_move
[gw3] [ 16%] PASSED test_quota/test.py::test_consumption_of_show_clusters
test_quota/test.py::test_consumption_of_show_databases
[gw3] [ 17%] PASSED test_quota/test.py::test_consumption_of_show_databases
test_quota/test.py::test_consumption_of_show_privileges
test_merge_tree_settings_constraints/test.py::test_merge_tree_settings_constraints
[gw3] [ 18%] PASSED test_quota/test.py::test_consumption_of_show_privileges
test_quota/test.py::test_consumption_of_show_processlist
[gw4] [ 19%] PASSED test_parallel_replicas_distributed_skip_shards/test.py::test_no_unavailable_shards[0]
test_parallel_replicas_distributed_skip_shards/test.py::test_no_unavailable_shards[1]
[gw3] [ 20%] PASSED test_quota/test.py::test_consumption_of_show_processlist
test_quota/test.py::test_consumption_of_show_tables
[gw3] [ 21%] PASSED test_quota/test.py::test_consumption_of_show_tables
test_quota/test.py::test_dcl_introspection
[gw4] [ 22%] PASSED test_parallel_replicas_distributed_skip_shards/test.py::test_no_unavailable_shards[1]
test_parallel_replicas_distributed_skip_shards/test.py::test_skip_unavailable_shards[0]
[gw4] [ 23%] PASSED test_parallel_replicas_distributed_skip_shards/test.py::test_skip_unavailable_shards[0]
test_parallel_replicas_distributed_skip_shards/test.py::test_skip_unavailable_shards[1]
[gw5] [ 24%] PASSED test_mutations_in_partitions_of_merge_tree/test.py::test_mutation_max_streams
test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_merge_tree_with_where
[gw2] [ 25%] PASSED test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-10-0]
test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-10-1]
[gw4] [ 26%] PASSED test_parallel_replicas_distributed_skip_shards/test.py::test_skip_unavailable_shards[1]
[gw3] [ 27%] PASSED test_quota/test.py::test_dcl_introspection
test_quota/test.py::test_dcl_management
[gw5] [ 28%] PASSED test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_merge_tree_with_where
test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_merge_tree_without_where
[gw1] [ 29%] FAILED test_postgresql_replica_database_engine_1/test.py::test_abrupt_server_restart_while_heavy_replication
[gw9] [ 30%] PASSED test_merge_tree_settings_constraints/test.py::test_merge_tree_settings_constraints
test_postgresql_replica_database_engine_1/test.py::test_changing_replica_identity_value
[gw5] [ 31%] PASSED test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_merge_tree_without_where
test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_replicated_merge_tree
[gw2] [ 32%] PASSED test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-10-1]
test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-2-0]
[gw0] [ 33%] PASSED test_materialized_mysql_database/test.py::test_materialized_database_settings_materialized_mysql_tables_list
test_materialized_mysql_database/test.py::test_materialized_database_support_all_kinds_of_mysql_datatype
test_overcommit_tracker/test.py::test_user_overcommit
[gw3] [ 34%] PASSED test_quota/test.py::test_dcl_management
test_quota/test.py::test_exceed_quota
[gw1] [ 35%] FAILED test_postgresql_replica_database_engine_1/test.py::test_changing_replica_identity_value
test_postgresql_replica_database_engine_1/test.py::test_clickhouse_restart
test_prometheus_endpoint/test.py::test_prometheus_endpoint
[gw1] [ 36%] FAILED test_postgresql_replica_database_engine_1/test.py::test_clickhouse_restart
[gw5] [ 37%] PASSED test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_replicated_merge_tree
[gw2] [ 38%] PASSED test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-2-0]
test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-2-1]
[gw3] [ 39%] PASSED test_quota/test.py::test_exceed_quota
test_quota/test.py::test_query_inserts
[gw0] [ 40%] PASSED test_materialized_mysql_database/test.py::test_materialized_database_support_all_kinds_of_mysql_datatype
test_materialized_mysql_database/test.py::test_materialized_with_column_comments
test_postgresql_replica_database_engine_1/test.py::test_concurrent_transactions
[gw3] [ 41%] PASSED test_quota/test.py::test_query_inserts
test_quota/test.py::test_quota_from_users_xml
[gw0] [ 42%] PASSED test_materialized_mysql_database/test.py::test_materialized_with_column_comments
test_materialized_mysql_database/test.py::test_materialized_with_enum
test_materialized_view_restart_server/test.py::test_materialized_view_with_subquery
[gw1] [ 43%] FAILED test_postgresql_replica_database_engine_1/test.py::test_concurrent_transactions
[gw3] [ 44%] PASSED test_quota/test.py::test_quota_from_users_xml
test_quota/test.py::test_reload_users_xml_by_timer
[gw2] [ 45%] PASSED test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-2-1]
test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-3-0]
[gw6] [ 46%] PASSED test_modify_engine_on_restart/test_unusual_path.py::test_modify_engine_on_restart_with_unusual_path
test_postgresql_replica_database_engine_1/test.py::test_different_data_types
[gw3] [ 47%] PASSED test_quota/test.py::test_reload_users_xml_by_timer
test_quota/test.py::test_simpliest_quota
test_on_cluster_timeouts/test.py::test_long_query
[gw2] [ 48%] PASSED test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-3-0]
test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-3-1]
[gw1] [ 49%] FAILED test_postgresql_replica_database_engine_1/test.py::test_different_data_types
[gw9] [ 50%] PASSED test_prometheus_endpoint/test.py::test_prometheus_endpoint
[gw3] [ 51%] PASSED test_quota/test.py::test_simpliest_quota
test_quota/test.py::test_tracking_quota
test_postgresql_replica_database_engine_1/test.py::test_drop_database_while_replication_startup_not_finished
[gw8] [ 52%] PASSED test_materialized_view_restart_server/test.py::test_materialized_view_with_subquery
[gw3] [ 53%] PASSED test_quota/test.py::test_tracking_quota
[gw2] [ 54%] PASSED test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-3-1]
test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-4-0]
[gw2] [ 55%] PASSED test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-4-0]
test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-4-1]
[gw0] [ 56%] PASSED test_materialized_mysql_database/test.py::test_materialized_with_enum
test_materialized_mysql_database/test.py::test_multi_table_update
[gw7] [ 57%] PASSED test_overcommit_tracker/test.py::test_user_overcommit
[gw2] [ 58%] PASSED test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-4-1]
test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-10-0]
[gw1] [ 59%] PASSED test_postgresql_replica_database_engine_1/test.py::test_drop_database_while_replication_startup_not_finished
[gw0] [ 60%] PASSED test_materialized_mysql_database/test.py::test_multi_table_update
test_materialized_mysql_database/test.py::test_mysql_kill_sync_thread_restore_5_7
test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_all_database_tables
test_passing_max_partitions_to_read_remotely/test.py::test_default_database_on_cluster
[gw2] [ 61%] PASSED test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-10-0]
test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-10-1]
[gw1] [ 62%] FAILED test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_all_database_tables
test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_subset_of_database_tables
[gw2] [ 63%] PASSED test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-10-1]
test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-2-0]
[gw7] [ 64%] PASSED test_passing_max_partitions_to_read_remotely/test.py::test_default_database_on_cluster
[gw0] [ 65%] PASSED test_materialized_mysql_database/test.py::test_mysql_kill_sync_thread_restore_5_7
test_materialized_mysql_database/test.py::test_mysql_kill_sync_thread_restore_8_0
[gw2] [ 66%] PASSED test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-2-0]
test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-2-1]
[gw1] [ 67%] FAILED test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_subset_of_database_tables
test_postgresql_replica_database_engine_1/test.py::test_many_concurrent_queries
[gw2] [ 68%] PASSED test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-2-1]
test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-3-0]
[gw0] [ 69%] PASSED test_materialized_mysql_database/test.py::test_mysql_kill_sync_thread_restore_8_0
test_materialized_mysql_database/test.py::test_mysql_killed_while_insert_5_7
[gw2] [ 70%] PASSED test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-3-0]
test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-3-1]
[gw2] [ 71%] PASSED test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-3-1]
test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-4-0]
[gw1] [ 72%] FAILED test_postgresql_replica_database_engine_1/test.py::test_many_concurrent_queries
[gw2] [ 73%] PASSED test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-4-0]
test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-4-1]
test_postgresql_replica_database_engine_1/test.py::test_multiple_databases
[gw2] [ 74%] PASSED test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-4-1]
[gw0] [ 75%] PASSED test_materialized_mysql_database/test.py::test_mysql_killed_while_insert_5_7
test_materialized_mysql_database/test.py::test_mysql_killed_while_insert_8_0
[gw1] [ 76%] FAILED test_postgresql_replica_database_engine_1/test.py::test_multiple_databases
test_postgresql_replica_database_engine_1/test.py::test_quoting_1
[gw6] [ 77%] PASSED test_on_cluster_timeouts/test.py::test_long_query
[gw1] [ 78%] FAILED test_postgresql_replica_database_engine_1/test.py::test_quoting_1
test_postgresql_replica_database_engine_1/test.py::test_quoting_2
[gw1] [ 79%] FAILED test_postgresql_replica_database_engine_1/test.py::test_quoting_2
test_postgresql_replica_database_engine_1/test.py::test_replica_identity_index
[gw1] [ 80%] FAILED test_postgresql_replica_database_engine_1/test.py::test_replica_identity_index
test_postgresql_replica_database_engine_1/test.py::test_replicating_dml
[gw1] [ 81%] FAILED test_postgresql_replica_database_engine_1/test.py::test_replicating_dml
test_postgresql_replica_database_engine_1/test.py::test_restart_server_while_replication_startup_not_finished
[gw0] [ 82%] PASSED test_materialized_mysql_database/test.py::test_mysql_killed_while_insert_8_0
test_materialized_mysql_database/test.py::test_mysql_settings[clickhouse_node0]
[gw0] [ 83%] PASSED test_materialized_mysql_database/test.py::test_mysql_settings[clickhouse_node0]
test_materialized_mysql_database/test.py::test_mysql_settings[clickhouse_node1]
[gw0] [ 84%] PASSED test_materialized_mysql_database/test.py::test_mysql_settings[clickhouse_node1]
test_materialized_mysql_database/test.py::test_named_collections
[gw1] [ 85%] FAILED test_postgresql_replica_database_engine_1/test.py::test_restart_server_while_replication_startup_not_finished
[gw0] [ 86%] PASSED test_materialized_mysql_database/test.py::test_named_collections
test_materialized_mysql_database/test.py::test_savepoint_query
test_postgresql_replica_database_engine_1/test.py::test_single_transaction
[gw0] [ 87%] PASSED test_materialized_mysql_database/test.py::test_savepoint_query
test_materialized_mysql_database/test.py::test_select_without_columns_5_7
[gw0] [ 88%] PASSED test_materialized_mysql_database/test.py::test_select_without_columns_5_7
test_materialized_mysql_database/test.py::test_select_without_columns_8_0
[gw1] [ 89%] FAILED test_postgresql_replica_database_engine_1/test.py::test_single_transaction
test_postgresql_replica_database_engine_1/test.py::test_table_schema_changes
[gw0] [ 90%] PASSED test_materialized_mysql_database/test.py::test_select_without_columns_8_0
test_materialized_mysql_database/test.py::test_system_parts_table
[gw1] [ 91%] FAILED test_postgresql_replica_database_engine_1/test.py::test_table_schema_changes
test_postgresql_replica_database_engine_1/test.py::test_user_managed_slots
[gw0] [ 92%] PASSED test_materialized_mysql_database/test.py::test_system_parts_table
test_materialized_mysql_database/test.py::test_system_tables_table
[gw1] [ 93%] FAILED test_postgresql_replica_database_engine_1/test.py::test_user_managed_slots
test_postgresql_replica_database_engine_1/test.py::test_virtual_columns
[gw0] [ 94%] PASSED test_materialized_mysql_database/test.py::test_system_tables_table
test_materialized_mysql_database/test.py::test_table_overrides
[gw1] [ 95%] FAILED test_postgresql_replica_database_engine_1/test.py::test_virtual_columns
[gw0] [ 96%] PASSED test_materialized_mysql_database/test.py::test_table_overrides
test_materialized_mysql_database/test.py::test_table_table
[gw0] [ 97%] PASSED test_materialized_mysql_database/test.py::test_table_table
test_materialized_mysql_database/test.py::test_table_with_indexes
[gw0] [ 98%] PASSED test_materialized_mysql_database/test.py::test_table_with_indexes
test_materialized_mysql_database/test.py::test_text_blob_charset
[gw0] [ 99%] PASSED test_materialized_mysql_database/test.py::test_text_blob_charset
test_materialized_mysql_database/test.py::test_utf8mb4
[gw0] [100%] PASSED test_materialized_mysql_database/test.py::test_utf8mb4

=================================== FAILURES ===================================
_____________ test_abrupt_connection_loss_while_heavy_replication ______________
[gw1] linux -- Python 3.10.12 /usr/bin/python3

started_cluster =

    def test_abrupt_connection_loss_while_heavy_replication(started_cluster):
        def transaction(thread_id):
            if thread_id % 2:
                conn = get_postgres_conn(
                    ip=started_cluster.postgres_ip,
                    port=started_cluster.postgres_port,
                    database=True,
                    auto_commit=True,
                )
            else:
                conn = get_postgres_conn(
                    ip=started_cluster.postgres_ip,
                    port=started_cluster.postgres_port,
                    database=True,
                    auto_commit=False,
                )
            cursor = conn.cursor()
            for query in queries:
                cursor.execute(query.format(thread_id))
                print("thread {}, query {}".format(thread_id, query))
            if thread_id % 2 == 0:
                conn.commit()

        NUM_TABLES = 6
        pg_manager.create_and_fill_postgres_tables(NUM_TABLES, numbers=0)

        threads_num = 6
        threads = []
        for i in range(threads_num):
            threads.append(threading.Thread(target=transaction, args=(i,)))

        pg_manager.create_materialized_db(
            ip=started_cluster.postgres_ip, port=started_cluster.postgres_port
        )

        for thread in threads:
            time.sleep(random.uniform(0, 0.5))
            thread.start()

        for thread in threads:
            thread.join()

        # Join here because it takes time for data to reach wal
        time.sleep(2)

        started_cluster.pause_container("postgres1")
        # for i in range(NUM_TABLES):
        #     result = instance.query(f"SELECT count() FROM test_database.postgresql_replica_{i}")
        #     print(result)  # Just debug
        started_cluster.unpause_container("postgres1")
>       check_several_tables_are_synchronized(instance, NUM_TABLES)

test_postgresql_replica_database_engine_1/test.py:752:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:419: in check_several_tables_are_synchronized
    check_tables_are_synchronized(
helpers/postgres_utility.py:392: in check_tables_are_synchronized
    result = instance.query(result_query)
helpers/cluster.py:3456: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")

        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")

        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E           helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18):
E           Code: 60. DB::Exception: Received from 172.16.2.3:9000. DB::Exception: Unknown table expression identifier 'test_database.postgresql_replica_0' in scope SELECT * FROM `test_database.postgresql_replica_0` ORDER BY key ASC. Stack trace:
E
E           0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x000000003a554b73
E           1. ./build_docker/./src/Common/Exception.cpp:96: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001cb49070
E           2. DB::Exception::Exception(int, FormatStringHelperImpl::type, std::type_identity::type>, String const&, String&&) @ 0x000000000c5c35b6
E           3. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:6971: DB::(anonymous namespace)::QueryAnalyzer::resolveQuery(std::shared_ptr const&, DB::(anonymous namespace)::IdentifierResolveScope&) @ 0x000000002dbba909
E           4. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::QueryAnalysisPass::run(std::shared_ptr&, std::shared_ptr) @ 0x000000002dbabc57
E           5. ./build_docker/./src/Analyzer/QueryTreePassManager.cpp:0: DB::QueryTreePassManager::run(std::shared_ptr) @ 0x000000002dba2d0e
E           6. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:0: DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::shared_ptr const&, DB::SelectQueryOptions const&, std::shared_ptr const&, std::shared_ptr const&) @ 0x000000002e5ea236
E           7. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:160: DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::shared_ptr const&, std::shared_ptr const&, DB::SelectQueryOptions const&, std::vector> const&) @ 0x000000002e5e2d54
E           8. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:0: std::__unique_if::__unique_single std::make_unique[abi:v15000]&, std::shared_ptr const&, DB::SelectQueryOptions const&>(std::shared_ptr&, std::shared_ptr const&, DB::SelectQueryOptions const&) @ 0x000000002e5ede81
E           9. ./contrib/llvm-project/libcxx/include/__functional/function.h:0: ? @ 0x000000002e49edbd
E           10. ./build_docker/./src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x000000002f22694a
E           11. ./build_docker/./src/Interpreters/executeQuery.cpp:1376: DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x000000002f21d9ca
E           12. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::TCPHandler::runImpl() @ 0x000000003272de3d
E           13. ./build_docker/./src/Server/TCPHandler.cpp:2341: DB::TCPHandler::run() @ 0x0000000032772891
E           14. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x000000003a25ceaf
E           15. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x000000003a25db57
E           16. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:0: Poco::PooledThread::run() @ 0x000000003a66ab3c
E           17. ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000003a663b28
E           18. asan_thread_start(void*) @ 0x000000000a7b9edb
E           19. ? @ 0x00007f9a75c99ac3
E           20. ? @ 0x00007f9a75d2b850
E           . (UNKNOWN_TABLE)
E
E           (query: select * from `test_database.postgresql_replica_0` order by key;)

helpers/client.py:239: QueryRuntimeException
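The test_postgresql_replica_database_engine_1 failures above all end the same way: check_several_tables_are_synchronized queries the MaterializedPostgreSQL side and the server reports UNKNOWN_TABLE for test_database.postgresql_replica_0, i.e. the replicated table is not (or not yet) attached when the check runs right after the container pause/unpause. Note also that the failing query wraps the whole dotted name in a single pair of backticks, which the analyzer resolves as one identifier; from this log alone it is not clear whether the table was genuinely missing or merely slow to reappear. A hypothetical sketch of a poll-until-synchronized pattern that tolerates the transient UNKNOWN_TABLE window (wait_for_table_count and its parameters are illustrative, not the actual helper from helpers/postgres_utility.py):

    import time

    def wait_for_table_count(instance, table, expected, timeout=60):
        # Poll the ClickHouse side until the table exists and reaches the
        # expected row count, instead of failing on the first UNKNOWN_TABLE.
        deadline = time.monotonic() + timeout
        query = f"SELECT count() FROM test_database.`{table}`"
        while time.monotonic() < deadline:
            try:
                if int(instance.query(query).strip()) == expected:
                    return
            except Exception:
                pass  # table may not be re-attached yet after the pause
            time.sleep(0.5)
        raise TimeoutError(f"{table} did not synchronize within {timeout}s")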
is_up=False (cluster.py:2666, start) 2025-06-08 17:15:56 [ 509 ] DEBUG : Docker networks for project roottestpostgresqlreplicadatabaseengine1 are NETWORK ID NAME DRIVER SCOPE (cluster.py:780, print_all_docker_pieces) 2025-06-08 17:15:56 [ 509 ] DEBUG : Docker containers for project roottestpostgresqlreplicadatabaseengine1 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:788, print_all_docker_pieces) 2025-06-08 17:15:56 [ 509 ] DEBUG : Docker volumes for project roottestpostgresqlreplicadatabaseengine1 are DRIVER VOLUME NAME (cluster.py:796, print_all_docker_pieces) 2025-06-08 17:15:56 [ 509 ] DEBUG : Cleanup called (cluster.py:801, cleanup) 2025-06-08 17:15:56 [ 509 ] DEBUG : Docker networks for project roottestpostgresqlreplicadatabaseengine1 are NETWORK ID NAME DRIVER SCOPE (cluster.py:780, print_all_docker_pieces) 2025-06-08 17:15:56 [ 509 ] DEBUG : Docker containers for project roottestpostgresqlreplicadatabaseengine1 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:788, print_all_docker_pieces) 2025-06-08 17:15:56 [ 509 ] DEBUG : Docker volumes for project roottestpostgresqlreplicadatabaseengine1 are DRIVER VOLUME NAME (cluster.py:796, print_all_docker_pieces) 2025-06-08 17:15:56 [ 509 ] DEBUG : Command:docker container list --all --filter name='^/roottestpostgresqlreplicadatabaseengine1_.*_1$' --format '{{.ID}}:{{.Names}}' (cluster.py:113, run_and_check) 2025-06-08 17:15:56 [ 509 ] DEBUG : Unstopped containers: {} (cluster.py:815, cleanup) 2025-06-08 17:15:56 [ 509 ] DEBUG : No running containers for project: roottestpostgresqlreplicadatabaseengine1 (cluster.py:829, cleanup) 2025-06-08 17:15:56 [ 509 ] DEBUG : Trying to prune unused networks... (cluster.py:835, cleanup) 2025-06-08 17:15:56 [ 509 ] DEBUG : Trying to prune unused images... (cluster.py:851, cleanup) 2025-06-08 17:15:56 [ 509 ] DEBUG : Command:['docker', 'image', 'prune', '-f'] (cluster.py:113, run_and_check) 2025-06-08 17:15:56 [ 509 ] DEBUG : Stderr:Error response from daemon: a prune operation is already running (cluster.py:123, run_and_check) 2025-06-08 17:15:56 [ 509 ] DEBUG : Exitcode:1 (cluster.py:125, run_and_check) 2025-06-08 17:15:56 [ 509 ] DEBUG : Trying to prune unused volumes... 
(cluster.py:860, cleanup) 2025-06-08 17:15:56 [ 509 ] DEBUG : Command:['docker volume ls | wc -l'] (cluster.py:113, run_and_check) 2025-06-08 17:15:56 [ 509 ] DEBUG : Stdout:1 (cluster.py:121, run_and_check) 2025-06-08 17:15:56 [ 509 ] DEBUG : Setup directory for instance: instance (cluster.py:2679, start) 2025-06-08 17:15:56 [ 509 ] DEBUG : Create directory for configuration generated in this helper (cluster.py:4383, create_dir) 2025-06-08 17:15:56 [ 509 ] DEBUG : Create directory for common tests configuration (cluster.py:4388, create_dir) 2025-06-08 17:15:56 [ 509 ] DEBUG : Copy common configuration from helpers (cluster.py:4408, create_dir) 2025-06-08 17:15:56 [ 509 ] DEBUG : Generate and write macros file (cluster.py:4441, create_dir) 2025-06-08 17:15:56 [ 509 ] DEBUG : Copy custom test config files ['/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/configs/log_conf.xml'] to /ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/instance/configs/config.d (cluster.py:4471, create_dir) 2025-06-08 17:15:56 [ 509 ] DEBUG : Setup database dir /ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/instance/database (cluster.py:4488, create_dir) 2025-06-08 17:15:56 [ 509 ] DEBUG : Setup logs dir /ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/instance/logs (cluster.py:4499, create_dir) 2025-06-08 17:15:56 [ 509 ] DEBUG : Entrypoint cmd: bash -c "trap 'pkill tail' INT TERM; clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon; coproc tail -f /dev/null; wait $$!" (cluster.py:4582, create_dir) 2025-06-08 17:15:56 [ 509 ] DEBUG : Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'POSTGRES_PORT': '5432', 'POSTGRES_DIR': '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/postgres/postgres1', 'POSTGRES_LOGS_FS': 'bind'} stored in /ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/.env (cluster.py:86, _create_env_file) 2025-06-08 17:15:56 [ 509 ] DEBUG : Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] (config.py:21, find_config_file) 2025-06-08 17:15:56 [ 509 ] DEBUG : No config file found (config.py:28, find_config_file) 2025-06-08 17:15:56 [ 509 ] DEBUG : Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] (config.py:21, find_config_file) 2025-06-08 17:15:56 [ 509 ] DEBUG : No config file found (config.py:28, find_config_file) 2025-06-08 17:15:56 [ 509 ] DEBUG : http://localhost:None "GET /version HTTP/1.1" 200 826 (connectionpool.py:546, _make_request) 2025-06-08 17:15:56 [ 509 ] DEBUG : Command:['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/.env', '--project-name', 'roottestpostgresqlreplicadatabaseengine1', '--file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/instance/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_postgres.yml', 'pull'] (cluster.py:113, run_and_check) 2025-06-08 17:16:07 [ 509 ] DEBUG : Stderr:Pulling instance ... 
(cluster.py:123, run_and_check) 2025-06-08 17:16:07 [ 509 ] DEBUG : Stderr:Pulling postgres1 ... (cluster.py:123, run_and_check) 2025-06-08 17:16:07 [ 509 ] DEBUG : Stderr:Pulling instance ... pulling from altinityinfra/integr... (cluster.py:123, run_and_check) 2025-06-08 17:16:07 [ 509 ] DEBUG : Stderr:Pulling postgres1 ... pulling from library/postgres (cluster.py:123, run_and_check) 2025-06-08 17:16:07 [ 509 ] DEBUG : Stderr:Pulling postgres1 ... digest: sha256:6efd0df010dc3cb40d... (cluster.py:123, run_and_check) 2025-06-08 17:16:07 [ 509 ] DEBUG : Stderr:Pulling postgres1 ... status: image is up to date for p... (cluster.py:123, run_and_check) 2025-06-08 17:16:07 [ 509 ] DEBUG : Stderr:Pulling instance ... digest: sha256:8a2c68e2d63d82c826... (cluster.py:123, run_and_check) 2025-06-08 17:16:07 [ 509 ] DEBUG : Stderr:Pulling instance ... status: image is up to date for a... (cluster.py:123, run_and_check) 2025-06-08 17:16:07 [ 509 ] DEBUG : Stderr:Pulling postgres1 ... done (cluster.py:123, run_and_check) 2025-06-08 17:16:07 [ 509 ] DEBUG : Stderr:Pulling instance ... done (cluster.py:123, run_and_check) 2025-06-08 17:16:07 [ 509 ] DEBUG : Setup Postgres (cluster.py:2791, start) 2025-06-08 17:16:07 [ 509 ] DEBUG : Command:['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/.env', '--project-name', 'roottestpostgresqlreplicadatabaseengine1', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_postgres.yml', '--verbose', 'up', '-d'] (cluster.py:113, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.config.config.find: Using configuration files: /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_postgres.yml (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.docker_client.get_client: docker-compose version 1.29.2, build unknown (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:docker-py version: (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:CPython version: 3.10.12 (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:OpenSSL version: OpenSSL 3.0.2 15 Mar 2022 (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.docker_client.get_client: Docker base_url: http+docker://localhost (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.docker_client.get_client: Docker version: Platform={'Name': 'Docker Engine - Community'}, Components=[{'Name': 'Engine', 'Version': '23.0.6', 'Details': {'ApiVersion': '1.42', 'Arch': 'amd64', 'BuildTime': '2023-05-05T21:18:13.000000000+00:00', 'Experimental': 'false', 'GitCommit': '9dbdbd4', 'GoVersion': 'go1.19.9', 'KernelVersion': '5.15.0-130-generic', 'MinAPIVersion': '1.12', 'Os': 'linux'}}, {'Name': 'containerd', 'Version': '1.7.18', 'Details': {'GitCommit': 'ae71819c4f5e67bb4d5ae76a6b735f29cc25774e'}}, {'Name': 'runc', 'Version': '1.7.18', 'Details': {'GitCommit': 'v1.1.13-0-g58aa920'}}, {'Name': 'docker-init', 'Version': '0.19.0', 'Details': {'GitCommit': 'de40ad0'}}], Version=23.0.6, ApiVersion=1.42, MinAPIVersion=1.12, GitCommit=9dbdbd4, GoVersion=go1.19.9, Os=linux, Arch=amd64, KernelVersion=5.15.0-130-generic, BuildTime=2023-05-05T21:18:13.000000000+00:00 (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_network <- 
('roottestpostgresqlreplicadatabaseengine1_default') (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker info <- () (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker info -> {'Architecture': 'x86_64', (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr: 'BridgeNfIp6tables': True, (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr: 'BridgeNfIptables': True, (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr: 'CPUSet': True, (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr: 'CPUShares': True, (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr: 'CgroupDriver': 'cgroupfs', (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr: 'CgroupVersion': '2', (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr: 'ContainerdCommit': {'Expected': 'ae71819c4f5e67bb4d5ae76a6b735f29cc25774e', (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr: 'ID': 'ae71819c4f5e67bb4d5ae76a6b735f29cc25774e'}, (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr: 'Containers': 0, (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:... (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_network <- ('roottestpostgresqlreplicadatabaseengine1_default') (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.network.ensure: Creating network "roottestpostgresqlreplicadatabaseengine1_default" with the default driver (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_network <- (name='roottestpostgresqlreplicadatabaseengine1_default', driver=None, options=None, ipam=None, internal=False, enable_ipv6=False, labels={'com.docker.compose.project': 'roottestpostgresqlreplicadatabaseengine1', 'com.docker.compose.network': 'default', 'com.docker.compose.version': '1.29.2'}, attachable=True, check_duplicate=True) (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_network -> {'Id': 'ff84dbbf26b24d3a254d7f85df542e159c4e0cb29b72957e519a7060e660c6bd', (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr: 'Warning': ''} (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=False, filters={'label': ['com.docker.compose.project=roottestpostgresqlreplicadatabaseengine1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=False, filters={'label': ['com.docker.compose.project=roottestpostgresqlreplicadatabaseengine1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: 
docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestpostgresqlreplicadatabaseengine1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestpostgresqlreplicadatabaseengine1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestpostgresqlreplicadatabaseengine1', 'com.docker.compose.service=postgres1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestpostgresqlreplicadatabaseengine1', 'com.docker.compose.service=postgres1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('postgres') (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {'Architecture': 'amd64', (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr: 'Author': '', (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr: 'Comment': 'buildkit.dockerfile.v0', (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr: 'Config': {'AttachStderr': False, (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr: 'Cmd': ['postgres'], (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr: 'Entrypoint': ['docker-entrypoint.sh'], (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr: 'Env': ['PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/postgresql/17/bin', (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:... 
(cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestpostgresqlreplicadatabaseengine1', 'com.docker.compose.service=postgres1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestpostgresqlreplicadatabaseengine1', 'com.docker.compose.service=postgres1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: {} (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.parallel.feed_queue: Starting producer thread for (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestpostgresqlreplicadatabaseengine1', 'com.docker.compose.service=postgres1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestpostgresqlreplicadatabaseengine1', 'com.docker.compose.service=postgres1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:Creating roottestpostgresqlreplicadatabaseengine1_postgres1_1 ... 
(cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: {ServiceName(project='roottestpostgresqlreplicadatabaseengine1', service='postgres1', number=1)} (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.parallel.feed_queue: Starting producer thread for ServiceName(project='roottestpostgresqlreplicadatabaseengine1', service='postgres1', number=1) (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('postgres') (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {'Architecture': 'amd64', (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr: 'Author': '', (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr: 'Comment': 'buildkit.dockerfile.v0', (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr: 'Config': {'AttachStderr': False, (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr: 'Cmd': ['postgres'], (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr: 'Entrypoint': ['docker-entrypoint.sh'], (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr: 'Env': ['PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/postgresql/17/bin', (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:... (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('postgres') (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {'Architecture': 'amd64', (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr: 'Author': '', (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr: 'Comment': 'buildkit.dockerfile.v0', (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr: 'Config': {'AttachStderr': False, (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr: 'Cmd': ['postgres'], (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr: 'Entrypoint': ['docker-entrypoint.sh'], (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr: 'Env': ['PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/postgresql/17/bin', (cluster.py:123, run_and_check) 2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:... 
(cluster.py:123, run_and_check)
2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.service.build_container_labels: Added config hash: 93968ec629d91980ea64eb0c7d74531b9606057f28d2f63fd9c8a6b21e104bb1 (cluster.py:123, run_and_check)
2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_host_config <- (links=[], port_bindings={}, binds=[], volumes_from=[], privileged=False, network_mode='roottestpostgresqlreplicadatabaseengine1_default', restart_policy={'Name': 'always', 'MaximumRetryCount': 0}, log_config={'Type': '', 'Config': {}}, mounts=[{'Target': '/postgres/', 'Source': '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/postgres/postgres1', 'Type': 'bind', 'ReadOnly': None}], [remaining parameters, all None, elided]) (cluster.py:123, run_and_check)
2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_host_config -> {'Binds': [], 'Links': [], 'LogConfig': {'Config': {}, 'Type': ''}, 'Mounts': [{'ReadOnly': None, 'Source': '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/postgres/postgres1', 'Target': '/postgres/', 'Type': 'bind'}], 'NetworkMode': 'roottestpostgresqlreplicadatabaseengine1_default', 'PortBindings': {}, 'RestartPolicy': {'MaximumRetryCount': 0, 'Name': 'always'}, ... (cluster.py:123, run_and_check)
2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_container <- (command=['postgres', '-c', 'wal_level=logical', '-c', 'max_replication_slots=4', '-c', 'logging_collector=on', '-c', 'log_directory=/postgres/logs', '-c', 'log_filename=postgresql.log', '-c', 'log_statement=all', '-c', 'max_connections=200'], environment=['POSTGRES_HOST_AUTH_METHOD=trust', 'POSTGRES_PASSWORD=mysecretpassword', 'PGDATA=/postgres/data'], healthcheck={'test': ['CMD-SHELL', 'pg_isready -U postgres'], 'interval': 10000000000, 'timeout': 5000000000, 'retries': 5}, image='postgres', volumes={}, name='roottestpostgresqlreplicadatabaseengine1_postgres1_1', detach=True, ports=['5432'], labels={'com.docker.compose.project': 'roottestpostgresqlreplicadatabaseengine1', 'com.docker.compose.service': 'postgres1', 'com.docker.compose.oneoff': 'False', 'com.docker.compose.project.working_dir': '/ClickHouse/tests/integration/compose', 'com.docker.compose.project.config_files': '/ClickHouse/tests/integration/compose/docker_compose_postgres.yml', 'com.docker.compose.project.environment_file': '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/.env', 'com.docker.compose.container-number': '1', 'com.docker.compose.version': '1.29.2', 'com.docker.compose.config-hash': '93968ec629d91980ea64eb0c7d74531b9606057f28d2f63fd9c8a6b21e104bb1'}, host_config={'NetworkMode': 'roottestpostgresqlreplicadatabaseengine1_default', 'RestartPolicy': {'Name': 'always', 'MaximumRetryCount': 0}, 'VolumesFrom': [], 'Binds': [], 'PortBindings': {}, 'Links': [], 'LogConfig': {'Type': '', 'Config': {}}, 'Mounts': [{'Target': '/postgres/', 'Source': '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/postgres/postgres1', 'Type': 'bind', 'ReadOnly': None}]}, networking_config={'EndpointsConfig': {'roottestpostgresqlreplicadatabaseengine1_default': {'Aliases': ['postgres1', 'postgre-sql.local'], 'IPAMConfig': {}}}}) (cluster.py:123, run_and_check)
2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_container -> {'Id': 'e20f80d72fa474f71913b79ba659c945472246a813913e00753ea0c27c6b8900', 'Warnings': []} (cluster.py:123, run_and_check)
2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- ('e20f80d72fa474f71913b79ba659c945472246a813913e00753ea0c27c6b8900') (cluster.py:123, run_and_check)
2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> {'AppArmorProfile': '', 'Args': ['postgres', '-c', 'wal_level=logical', '-c', 'max_replication_slots=4', '-c', 'logging_collector=on', '-c', 'log_directory=/postgres/logs',
(cluster.py:123, run_and_check)
2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:... (cluster.py:123, run_and_check)
2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker disconnect_container_from_network <- ('e20f80d72fa474f71913b79ba659c945472246a813913e00753ea0c27c6b8900', 'roottestpostgresqlreplicadatabaseengine1_default') (cluster.py:123, run_and_check)
2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker disconnect_container_from_network -> None (cluster.py:123, run_and_check)
2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network <- ('e20f80d72fa474f71913b79ba659c945472246a813913e00753ea0c27c6b8900', 'roottestpostgresqlreplicadatabaseengine1_default', aliases=['e20f80d72fa4', 'postgres1', 'postgre-sql.local'], ipv4_address=None, ipv6_address=None, links=[], link_local_ips=None) (cluster.py:123, run_and_check)
2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network -> None (cluster.py:123, run_and_check)
2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker start <- ('e20f80d72fa474f71913b79ba659c945472246a813913e00753ea0c27c6b8900') (cluster.py:123, run_and_check)
[eight consecutive 'compose.parallel.feed_queue: Pending: set()' records elided]
2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker start -> None (cluster.py:123, run_and_check)
2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.parallel.parallel_execute_iter: Finished processing: ServiceName(project='roottestpostgresqlreplicadatabaseengine1', service='postgres1', number=1) (cluster.py:123, run_and_check)
2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:Creating roottestpostgresqlreplicadatabaseengine1_postgres1_1 ... done (cluster.py:123, run_and_check)
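[Editor's note: the proxy calls above trace docker-compose v1 driving the docker-py low-level API: build a host config, create the container on the project network, swap its endpoint to attach the service aliases, then start it. A minimal sketch of that same sequence against the docker SDK follows; the network name "app_net" and the trimmed command are illustrative assumptions, not the compose implementation itself.]

import docker

# Low-level docker SDK client, mirroring the proxied calls in the log.
client = docker.APIClient()

host_config = client.create_host_config(
    binds=[],
    port_bindings={},
    restart_policy={"Name": "always", "MaximumRetryCount": 0},
)
networking_config = client.create_networking_config(
    {"app_net": client.create_endpoint_config()}  # "app_net" is hypothetical
)
container = client.create_container(
    image="postgres",
    command=["postgres", "-c", "wal_level=logical"],  # trimmed from the log
    detach=True,
    host_config=host_config,
    networking_config=networking_config,
)
cid = container["Id"]

# compose detaches and re-attaches the endpoint so it can register the
# service aliases ("postgres1", "postgre-sql.local" in the log above).
client.disconnect_container_from_network(cid, "app_net")
client.connect_container_to_network(cid, "app_net", aliases=["postgres1"])
client.start(cid)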
2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check)
2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.parallel.parallel_execute_iter: Finished processing: (cluster.py:123, run_and_check)
2025-06-08 17:16:08 [ 509 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check)
2025-06-08 17:16:08 [ 509 ] DEBUG : get_instance_ip instance_name=postgres1 (cluster.py:2008, get_instance_ip)
2025-06-08 17:16:08 [ 509 ] DEBUG : http://localhost:None "GET /v1.42/containers/roottestpostgresqlreplicadatabaseengine1_postgres1_1/json HTTP/1.1" 200 None (connectionpool.py:546, _make_request)
2025-06-08 17:16:08 [ 509 ] DEBUG : Can't connect to Postgres connection to server at "172.16.2.2", port 5432 failed: Connection refused Is the server running on that host and accepting TCP/IP connections? (cluster.py:2251, wait_postgres_to_start)
2025-06-08 17:16:09 [ 509 ] DEBUG : Can't connect to Postgres connection to server at "172.16.2.2", port 5432 failed: Connection refused Is the server running on that host and accepting TCP/IP connections? (cluster.py:2251, wait_postgres_to_start)
2025-06-08 17:16:09 [ 509 ] DEBUG : Can't connect to Postgres connection to server at "172.16.2.2", port 5432 failed: Connection refused Is the server running on that host and accepting TCP/IP connections? (cluster.py:2251, wait_postgres_to_start)
2025-06-08 17:16:10 [ 509 ] DEBUG : Postgres Started (cluster.py:2248, wait_postgres_to_start)
2025-06-08 17:16:10 [ 509 ] DEBUG : ('Trying to create ClickHouse instance by command %s', 'docker-compose --env-file /ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/.env --project-name roottestpostgresqlreplicadatabaseengine1 --file /ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/instance/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_postgres.yml up -d --no-recreate') (cluster.py:3002, start)
2025-06-08 17:16:10 [ 509 ] DEBUG : Command:['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/.env', '--project-name', 'roottestpostgresqlreplicadatabaseengine1', '--file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/instance/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_postgres.yml', 'up', '-d', '--no-recreate'] (cluster.py:113, run_and_check)
2025-06-08 17:16:10 [ 509 ] DEBUG : Stderr:Creating roottestpostgresqlreplicadatabaseengine1_instance_1 ... (cluster.py:123, run_and_check)
2025-06-08 17:16:10 [ 509 ] DEBUG : Stderr:Creating roottestpostgresqlreplicadatabaseengine1_instance_1 ... done (cluster.py:123, run_and_check)
2025-06-08 17:16:10 [ 509 ] DEBUG : ClickHouse instance created (cluster.py:3010, start)
2025-06-08 17:16:10 [ 509 ] DEBUG : get_instance_ip instance_name=instance (cluster.py:2008, get_instance_ip)
2025-06-08 17:16:10 [ 509 ] DEBUG : http://localhost:None "GET /v1.42/containers/roottestpostgresqlreplicadatabaseengine1_instance_1/json HTTP/1.1" 200 None (connectionpool.py:546, _make_request)
2025-06-08 17:16:10 [ 509 ] DEBUG : Waiting for ClickHouse start in instance, ip: 172.16.2.3... (cluster.py:3017, start)
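[Editor's note: the three "Connection refused" records show wait_postgres_to_start polling until the server accepts TCP connections. A minimal sketch of such a readiness loop, assuming psycopg2; the timeout and sleep interval are illustrative values, not the helper's actual ones.]

import time
import psycopg2

def wait_postgres_to_start(ip, port, timeout=60.0):
    # Poll until Postgres accepts TCP connections or the deadline passes;
    # each failed attempt corresponds to one "Can't connect" record above.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            conn = psycopg2.connect(
                host=ip, port=port, user="postgres", password="mysecretpassword"
            )
            conn.close()
            print("Postgres Started")
            return
        except psycopg2.OperationalError as err:
            print(f"Can't connect to Postgres {err}")
            time.sleep(0.5)
    raise TimeoutError("Postgres did not become ready in time")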
2025-06-08 17:16:10 [ 509 ] DEBUG : http://localhost:None "GET /v1.42/containers/roottestpostgresqlreplicadatabaseengine1_instance_1/json HTTP/1.1" 200 None (connectionpool.py:546, _make_request)
2025-06-08 17:16:10 [ 509 ] DEBUG : http://localhost:None "GET /v1.42/containers/a0002fecac9b881fb070e719748ccfc772a2a4f8e2827dcdef15d7715ed5c390/json HTTP/1.1" 200 None (connectionpool.py:546, _make_request)
[about twenty further identical status polls of the same container between 17:16:10 and 17:16:13 elided]
2025-06-08 17:16:13 [ 509 ] DEBUG : ClickHouse instance started (cluster.py:3021, start)
2025-06-08 17:16:13 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:16:13 [ 509 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.2.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
----------------------------- Captured stdout call -----------------------------
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_0" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_1" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_2" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_3" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_4" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_5" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
[threads 0-5 each executed the following query sequence against its own table; the interleaved "thread N, query ..." lines of the original output are collapsed to one copy:]
INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i);
DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0;
UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0;
UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0
INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i);
DELETE FROM postgresql_replica_{} WHERE key % 10 = 0;
UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1;
UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1
DELETE FROM postgresql_replica_{} WHERE value % 2 = 0;
UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0;
INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i);
DELETE FROM postgresql_replica_{} WHERE value % 3 = 0;
UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0;
UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1
INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i);
DELETE FROM postgresql_replica_{} WHERE value % 9 = 2;
UPDATE postgresql_replica_{} SET key=key+10000000
UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1;
DELETE FROM postgresql_replica_{} WHERE value%5 = 0;
Checking table postgresql_replica_0 exists in test_database
Checking table is synchronized: test_database.postgresql_replica_0
------------------------------ Captured log call -------------------------------
2025-06-08 17:16:13 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:16:13 [ 509 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.2.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
2025-06-08 17:16:13 [ 509 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query)
2025-06-08 17:16:19 [ 509 ] DEBUG : Command:['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/.env', '--project-name', 'roottestpostgresqlreplicadatabaseengine1', '--file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/instance/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_postgres.yml', 'pause', 'postgres1'] (cluster.py:113, run_and_check)
2025-06-08 17:16:19 [ 509 ] DEBUG : Stderr:Pausing roottestpostgresqlreplicadatabaseengine1_postgres1_1 ... (cluster.py:123, run_and_check)
2025-06-08 17:16:19 [ 509 ] DEBUG : Stderr:Pausing roottestpostgresqlreplicadatabaseengine1_postgres1_1 ... done (cluster.py:123, run_and_check)
2025-06-08 17:16:19 [ 509 ] DEBUG : Command:['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/.env', '--project-name', 'roottestpostgresqlreplicadatabaseengine1', '--file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/instance/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_postgres.yml', 'unpause', 'postgres1'] (cluster.py:113, run_and_check)
2025-06-08 17:16:19 [ 509 ] DEBUG : Stderr:Unpausing roottestpostgresqlreplicadatabaseengine1_postgres1_1 ... (cluster.py:123, run_and_check)
2025-06-08 17:16:19 [ 509 ] DEBUG : Stderr:Unpausing roottestpostgresqlreplicadatabaseengine1_postgres1_1 ... done (cluster.py:123, run_and_check)
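[Editor's note: "Checking table is synchronized" compares an ordered SELECT from the source postgres_database (a PostgreSQL database engine reading Postgres directly) against the same table in the MaterializedPostgreSQL database. A minimal sketch of that comparison follows; it is a hypothetical simplification of check_tables_are_synchronized in helpers/postgres_utility.py, with retry handling omitted.]

def check_table_is_synchronized(instance, table_name,
                                postgres_db="postgres_database",
                                materialized_db="test_database"):
    # Reference rows, read straight from Postgres via the PostgreSQL
    # database engine created during setup.
    expected = instance.query(
        f"select * from `{postgres_db}`.`{table_name}` order by key;"
    )
    # Rows replicated by the MaterializedPostgreSQL database; note that
    # database and table must be quoted as two separate identifiers.
    result = instance.query(
        f"select * from `{materialized_db}`.`{table_name}` order by key;"
    )
    assert result == expected, f"{table_name} is not synchronized"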
2025-06-08 17:16:19 [ 509 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:16:20 [ 509 ] DEBUG : Executing query select * from `postgres_database`.`postgresql_replica_0` order by key; on instance (cluster.py:3455, query)
2025-06-08 17:16:20 [ 509 ] DEBUG : Executing query select * from `test_database.postgresql_replica_0` order by key; on instance (cluster.py:3455, query)
---------------------------- Captured log teardown -----------------------------
2025-06-08 17:16:20 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3455, query)
2025-06-08 17:16:22 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:16:22 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:16:22 [ 509 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.2.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
______________ test_abrupt_server_restart_while_heavy_replication ______________
[gw1] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 

    def test_abrupt_server_restart_while_heavy_replication(started_cluster):
        def transaction(thread_id):
            if thread_id % 2:
                conn = get_postgres_conn(
                    ip=started_cluster.postgres_ip,
                    port=started_cluster.postgres_port,
                    database=True,
                    auto_commit=True,
                )
            else:
                conn = get_postgres_conn(
                    ip=started_cluster.postgres_ip,
                    port=started_cluster.postgres_port,
                    database=True,
                    auto_commit=False,
                )
            cursor = conn.cursor()
            for query in queries:
                cursor.execute(query.format(thread_id))
                print("thread {}, query {}".format(thread_id, query))
            if thread_id % 2 == 0:
                conn.commit()

        NUM_TABLES = 6
        pg_manager.create_and_fill_postgres_tables(tables_num=NUM_TABLES, numbers=0)

        threads = []
        threads_num = 6
        for i in range(threads_num):
            threads.append(threading.Thread(target=transaction, args=(i,)))

        pg_manager.create_materialized_db(
            ip=started_cluster.postgres_ip, port=started_cluster.postgres_port
        )

        for thread in threads:
            time.sleep(random.uniform(0, 0.5))
            thread.start()

        for thread in threads:
            thread.join()  # Join here because it takes time for data to reach wal

        instance.restart_clickhouse()

>       check_several_tables_are_synchronized(instance, NUM_TABLES)

test_postgresql_replica_database_engine_1/test.py:820:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:419: in check_several_tables_are_synchronized
    check_tables_are_synchronized(
helpers/postgres_utility.py:392: in check_tables_are_synchronized
    result = instance.query(result_query)
helpers/cluster.py:3456: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = 

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E       helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18):
E       Code: 60. DB::Exception: Received from 172.16.2.3:9000. DB::Exception: Unknown table expression identifier 'test_database.postgresql_replica_0' in scope SELECT * FROM `test_database.postgresql_replica_0` ORDER BY key ASC. Stack trace:
E
E       0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x000000003a554b73
E       1. ./build_docker/./src/Common/Exception.cpp:96: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001cb49070
E       2. DB::Exception::Exception(int, FormatStringHelperImpl::type, std::type_identity::type>, String const&, String&&) @ 0x000000000c5c35b6
E       3. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:6971: DB::(anonymous namespace)::QueryAnalyzer::resolveQuery(std::shared_ptr const&, DB::(anonymous namespace)::IdentifierResolveScope&) @ 0x000000002dbba909
E       4. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::QueryAnalysisPass::run(std::shared_ptr&, std::shared_ptr) @ 0x000000002dbabc57
E       5. ./build_docker/./src/Analyzer/QueryTreePassManager.cpp:0: DB::QueryTreePassManager::run(std::shared_ptr) @ 0x000000002dba2d0e
E       6. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:0: DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::shared_ptr const&, DB::SelectQueryOptions const&, std::shared_ptr const&, std::shared_ptr const&) @ 0x000000002e5ea236
E       7. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:160: DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::shared_ptr const&, std::shared_ptr const&, DB::SelectQueryOptions const&, std::vector> const&) @ 0x000000002e5e2d54
E       8. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:0: std::__unique_if::__unique_single std::make_unique[abi:v15000]&, std::shared_ptr const&, DB::SelectQueryOptions const&>(std::shared_ptr&, std::shared_ptr const&, DB::SelectQueryOptions const&) @ 0x000000002e5ede81
E       9. ./contrib/llvm-project/libcxx/include/__functional/function.h:0: ? @ 0x000000002e49edbd
E       10. ./build_docker/./src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x000000002f22694a
E       11. ./build_docker/./src/Interpreters/executeQuery.cpp:1376: DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x000000002f21d9ca
E       12. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::TCPHandler::runImpl() @ 0x000000003272de3d
E       13. ./build_docker/./src/Server/TCPHandler.cpp:2341: DB::TCPHandler::run() @ 0x0000000032772891
E       14. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x000000003a25ceaf
E       15. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x000000003a25db57
E       16. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:0: Poco::PooledThread::run() @ 0x000000003a66ab3c
E       17. ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000003a663b28
E       18. asan_thread_start(void*) @ 0x000000000a7b9edb
E       19. ? @ 0x00007ff7a113bac3
E       20. ? @ 0x00007ff7a11cd850
E       . (UNKNOWN_TABLE)
E       (query: select * from `test_database.postgresql_replica_0` order by key;)

helpers/client.py:239: QueryRuntimeException
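[Editor's note: this failure and the test_changing_replica_identity_value failure below share one detail: the failing query wraps the whole dotted name in a single pair of backticks, so the analyzer resolves one identifier literally named 'test_database.postgresql_replica_0' instead of table postgresql_replica_0 inside database test_database, and raises UNKNOWN_TABLE. A sketch of the distinction follows; the variable names are illustrative and this is not the helper's actual code.]

db, table = "test_database", "postgresql_replica_0"

# What the log shows the helper built: the dotted name inside one pair of
# backticks, i.e. a single identifier that happens to contain a dot.
broken = f"select * from `{db}.{table}` order by key;"

# What resolves correctly: database and table quoted separately.
fixed = f"select * from `{db}`.`{table}` order by key;"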
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
----------------------------- Captured stdout call -----------------------------
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_0" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
[identical CREATE TABLE queries for "postgresql_replica_1" through "postgresql_replica_5" elided]
[threads 0-5 again executed the same 19-query sequence listed for the previous test; the interleaved "thread N, query ..." output is elided]
Checking table postgresql_replica_0 exists in test_database
Checking table is synchronized: test_database.postgresql_replica_0
------------------------------ Captured log call -------------------------------
2025-06-08 17:16:22 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:16:23 [ 509 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.2.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
2025-06-08 17:16:23 [ 509 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query)
2025-06-08 17:16:26 [ 509 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine1_instance_1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse'] (cluster.py:2046, exec_in_container)
2025-06-08 17:16:26 [ 509 ] DEBUG : Command:['docker', 'exec', '-u', 'root', 'roottestpostgresqlreplicadatabaseengine1_instance_1', 'bash', '-c', 'ps -C clickhouse'] (cluster.py:113, run_and_check)
2025-06-08 17:16:26 [ 509 ] DEBUG : Stdout: PID TTY TIME CMD (cluster.py:121, run_and_check)
2025-06-08 17:16:26 [ 509 ] DEBUG : Stdout: 8 ? 00:00:07 clickhouse (cluster.py:121, run_and_check)
2025-06-08 17:16:26 [ 509 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine1_instance_1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse'] (cluster.py:2046, exec_in_container)
2025-06-08 17:16:26 [ 509 ] DEBUG : Command:['docker', 'exec', '-u', 'root', 'roottestpostgresqlreplicadatabaseengine1_instance_1', 'bash', '-c', 'pkill clickhouse'] (cluster.py:113, run_and_check)
2025-06-08 17:16:26 [ 509 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine1_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2046, exec_in_container)
2025-06-08 17:16:26 [ 509 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine1_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check)
2025-06-08 17:16:27 [ 509 ] DEBUG : Stdout:8 (cluster.py:121, run_and_check)
[the same ps-poll was repeated about once per second and still reported PID 8 through 17:16:29]
2025-06-08 17:16:30 [ 509 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine1_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2046, exec_in_container)
2025-06-08 17:16:30 [ 509 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine1_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check)
2025-06-08 17:16:30 [ 509 ] DEBUG : Stdout:8 (cluster.py:121, run_and_check)
2025-06-08 17:16:30 [ 509 ] DEBUG : Stdout:741 (cluster.py:121, run_and_check)
[two further ps-polls at 17:16:31 returned no clickhouse PIDs]
2025-06-08 17:16:31 [ 509 ] DEBUG : No clickhouse process running. Start new one. (cluster.py:3817, start_clickhouse)
2025-06-08 17:16:31 [ 509 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine1_instance_1 detach:False nothrow:False cmd: ['bash', '-c', 'clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon'] (cluster.py:2046, exec_in_container)
2025-06-08 17:16:31 [ 509 ] DEBUG : Command:['docker', 'exec', '-u', '0', 'roottestpostgresqlreplicadatabaseengine1_instance_1', 'bash', '-c', 'clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon'] (cluster.py:113, run_and_check)
2025-06-08 17:16:32 [ 509 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine1_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2046, exec_in_container)
2025-06-08 17:16:32 [ 509 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine1_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check)
2025-06-08 17:16:32 [ 509 ] DEBUG : Stdout:777 (cluster.py:121, run_and_check)
2025-06-08 17:16:32 [ 509 ] DEBUG : Clickhouse process running. (cluster.py:3828, start_clickhouse)
[a second identical ps-poll at 17:16:32 again reported PID 777]
2025-06-08 17:16:32 [ 509 ] DEBUG : Executing query select 20 on instance (cluster.py:3455, query)
2025-06-08 17:16:33 [ 509 ] DEBUG : Executing query select 20 on instance (cluster.py:3455, query)
2025-06-08 17:16:33 [ 509 ] DEBUG : Executing query select 20 on instance (cluster.py:3455, query)
2025-06-08 17:16:34 [ 509 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:16:34 [ 509 ] DEBUG : Executing query select * from `postgres_database`.`postgresql_replica_0` order by key; on instance (cluster.py:3455, query)
2025-06-08 17:16:34 [ 509 ] DEBUG : Executing query select * from `test_database.postgresql_replica_0` order by key; on instance (cluster.py:3455, query)
---------------------------- Captured log teardown -----------------------------
2025-06-08 17:16:34 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3455, query)
2025-06-08 17:16:35 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:16:35 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:16:35 [ 509 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.2.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
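[Editor's note: restart_clickhouse kills the server process inside the container, polls ps until it is gone, relaunches the daemon, and probes with a trivial query, exactly as the records above show. A compressed sketch of that flow follows; instance.exec_in_container and instance.query are hypothetical stand-ins for the helpers/cluster.py API, and the timeouts are illustrative.]

import time

def restart_clickhouse(instance, stop_timeout=30):
    # Kill the server inside the container ("pkill clickhouse" in the log) ...
    instance.exec_in_container(["bash", "-c", "pkill clickhouse"], user="root")
    # ... and poll ps until the old process disappears.
    for _ in range(stop_timeout):
        pids = instance.exec_in_container(
            ["bash", "-c",
             "ps ax | grep 'clickhouse' | grep -v 'grep' | awk '{print $1}'"]
        )
        if not pids.strip():
            break
        time.sleep(1)
    # Relaunch the daemon, then probe with a trivial query until it answers.
    instance.exec_in_container(
        ["bash", "-c",
         "clickhouse server --config-file=/etc/clickhouse-server/config.xml --daemon"],
        user="root",
    )
    while True:
        try:
            if instance.query("select 20").strip() == "20":
                break
        except Exception:
            pass
        time.sleep(0.5)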
'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
_____________________ test_changing_replica_identity_value _____________________
[gw1] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 

    def test_changing_replica_identity_value(started_cluster):
        pg_manager.create_postgres_table("postgresql_replica")
        instance.query(
            "INSERT INTO postgres_database.postgresql_replica SELECT 50 + number, number from numbers(50)"
        )
        pg_manager.create_materialized_db(
            ip=started_cluster.postgres_ip, port=started_cluster.postgres_port
        )
        instance.query(
            "INSERT INTO postgres_database.postgresql_replica SELECT 100 + number, number from numbers(50)"
        )
>       check_tables_are_synchronized(instance, "postgresql_replica")

test_postgresql_replica_database_engine_1/test.py:292: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
helpers/postgres_utility.py:392: in check_tables_are_synchronized
    result = instance.query(result_query)
helpers/cluster.py:3456: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = 

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")

        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")

        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E           helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18):
E           Code: 60. DB::Exception: Received from 172.16.2.3:9000. DB::Exception: Unknown table expression identifier 'test_database.postgresql_replica' in scope SELECT * FROM `test_database.postgresql_replica` ORDER BY key ASC. Stack trace:
E
E           0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x000000003a554b73
E           1. ./build_docker/./src/Common/Exception.cpp:96: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001cb49070
E           2. DB::Exception::Exception(int, FormatStringHelperImpl::type, std::type_identity::type>, String const&, String&&) @ 0x000000000c5c35b6
E           3. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:6971: DB::(anonymous namespace)::QueryAnalyzer::resolveQuery(std::shared_ptr const&, DB::(anonymous namespace)::IdentifierResolveScope&) @ 0x000000002dbba909
E           4. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::QueryAnalysisPass::run(std::shared_ptr&, std::shared_ptr) @ 0x000000002dbabc57
E           5. ./build_docker/./src/Analyzer/QueryTreePassManager.cpp:0: DB::QueryTreePassManager::run(std::shared_ptr) @ 0x000000002dba2d0e
E           6. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:0: DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::shared_ptr const&, DB::SelectQueryOptions const&, std::shared_ptr const&, std::shared_ptr const&) @ 0x000000002e5ea236
E           7.
./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:160: DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::shared_ptr const&, std::shared_ptr const&, DB::SelectQueryOptions const&, std::vector> const&) @ 0x000000002e5e2d54 E 8. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:0: std::__unique_if::__unique_single std::make_unique[abi:v15000]&, std::shared_ptr const&, DB::SelectQueryOptions const&>(std::shared_ptr&, std::shared_ptr const&, DB::SelectQueryOptions const&) @ 0x000000002e5ede81 E 9. ./contrib/llvm-project/libcxx/include/__functional/function.h:0: ? @ 0x000000002e49edbd E 10. ./build_docker/./src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x000000002f22694a E 11. ./build_docker/./src/Interpreters/executeQuery.cpp:1376: DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x000000002f21d9ca E 12. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::TCPHandler::runImpl() @ 0x000000003272de3d E 13. ./build_docker/./src/Server/TCPHandler.cpp:2341: DB::TCPHandler::run() @ 0x0000000032772891 E 14. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x000000003a25ceaf E 15. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x000000003a25db57 E 16. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:0: Poco::PooledThread::run() @ 0x000000003a66ab3c E 17. ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000003a663b28 E 18. asan_thread_start(void*) @ 0x000000000a7b9edb E 19. ? @ 0x00007ff7a113bac3 E 20. ? @ 0x00007ff7a11cd850 E . 
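Every failure in this section dies with code 60 (UNKNOWN_TABLE) on the same query shape. The verification query wraps the whole dotted name in one pair of backticks, `test_database.postgresql_replica`, so the analyzer resolves it as a single table identifier with no database part; the query issued immediately before it, select * from `postgres_database`.`postgresql_replica`, quotes the two parts separately and succeeds. A sketch of how the helper behind helpers/postgres_utility.py:392 could build result_query correctly; the function name and body are assumptions, only the two query shapes come from this log:

    def build_result_query(database: str, table: str, order_by: str = "key") -> str:
        # Quoting the dotted name as one identifier reproduces the failure:
        #   select * from `test_database.postgresql_replica` ...  -> UNKNOWN_TABLE
        # Quoting each part separately lets ClickHouse resolve db and table:
        return f"select * from `{database}`.`{table}` order by {order_by};"

    # build_result_query("test_database", "postgresql_replica")
    # -> "select * from `test_database`.`postgresql_replica` order by key;"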
(UNKNOWN_TABLE) E (query: select * from `test_database.postgresql_replica` order by key;) helpers/client.py:239: QueryRuntimeException ---------------------------- Captured stdout setup ----------------------------- PostgreSQL is available - running test ----------------------------- Captured stdout call ----------------------------- Query: CREATE TABLE IF NOT EXISTS "postgresql_replica" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Checking table postgresql_replica exists in test_database Checking table is synchronized: test_database.postgresql_replica ------------------------------ Captured log call ------------------------------- 2025-06-08 17:16:35 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica SELECT 50 + number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:16:36 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:16:36 [ 509 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.2.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) 2025-06-08 17:16:36 [ 509 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query) 2025-06-08 17:16:36 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica SELECT 100 + number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:16:36 [ 509 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:16:37 [ 509 ] DEBUG : Executing query select * from `postgres_database`.`postgresql_replica` order by key; on instance (cluster.py:3455, query) 2025-06-08 17:16:37 [ 509 ] DEBUG : Executing query select * from `test_database.postgresql_replica` order by key; on instance (cluster.py:3455, query) ---------------------------- Captured log teardown ----------------------------- 2025-06-08 17:16:37 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3455, query) 2025-06-08 17:16:37 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query) 2025-06-08 17:16:38 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query) 2025-06-08 17:16:38 [ 509 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.2.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) ___________________________ test_clickhouse_restart ____________________________ [gw1] linux -- Python 3.10.12 /usr/bin/python3 started_cluster = def test_clickhouse_restart(started_cluster): NUM_TABLES = 5 pg_manager.create_and_fill_postgres_tables(NUM_TABLES) pg_manager.create_materialized_db( ip=started_cluster.postgres_ip, port=started_cluster.postgres_port ) > check_several_tables_are_synchronized(instance, NUM_TABLES) test_postgresql_replica_database_engine_1/test.py:303: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ helpers/postgres_utility.py:419: in check_several_tables_are_synchronized check_tables_are_synchronized( helpers/postgres_utility.py:392: in check_tables_are_synchronized result = instance.query(result_query) helpers/cluster.py:3456: in query return self.client.query( helpers/client.py:36: in wrap return func(self, *args, **kwargs) helpers/client.py:74: in query ).get_answer() _ _ _ 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def get_answer(self): self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT) self.stdout_file.seek(0) self.stderr_file.seek(0) stdout = self.stdout_file.read().decode("utf-8", errors="replace") stderr = self.stderr_file.read().decode("utf-8", errors="replace") if ( self.timer is not None and not self.process_finished_before_timeout and not self.ignore_error ): logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}") raise QueryTimeoutExceedException("Client timed out!") if ( self.process.returncode != 0 or self.remove_trash_from_stderr(stderr) ) and not self.ignore_error: > raise QueryRuntimeException( "Client failed! Return code: {}, stderr: {}".format( self.process.returncode, stderr ), self.process.returncode, stderr, ) E helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18): E Code: 60. DB::Exception: Received from 172.16.2.3:9000. DB::Exception: Unknown table expression identifier 'test_database.postgresql_replica_0' in scope SELECT * FROM `test_database.postgresql_replica_0` ORDER BY key ASC. Stack trace: E E 0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x000000003a554b73 E 1. ./build_docker/./src/Common/Exception.cpp:96: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001cb49070 E 2. DB::Exception::Exception(int, FormatStringHelperImpl::type, std::type_identity::type>, String const&, String&&) @ 0x000000000c5c35b6 E 3. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:6971: DB::(anonymous namespace)::QueryAnalyzer::resolveQuery(std::shared_ptr const&, DB::(anonymous namespace)::IdentifierResolveScope&) @ 0x000000002dbba909 E 4. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::QueryAnalysisPass::run(std::shared_ptr&, std::shared_ptr) @ 0x000000002dbabc57 E 5. ./build_docker/./src/Analyzer/QueryTreePassManager.cpp:0: DB::QueryTreePassManager::run(std::shared_ptr) @ 0x000000002dba2d0e E 6. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:0: DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::shared_ptr const&, DB::SelectQueryOptions const&, std::shared_ptr const&, std::shared_ptr const&) @ 0x000000002e5ea236 E 7. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:160: DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::shared_ptr const&, std::shared_ptr const&, DB::SelectQueryOptions const&, std::vector> const&) @ 0x000000002e5e2d54 E 8. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:0: std::__unique_if::__unique_single std::make_unique[abi:v15000]&, std::shared_ptr const&, DB::SelectQueryOptions const&>(std::shared_ptr&, std::shared_ptr const&, DB::SelectQueryOptions const&) @ 0x000000002e5ede81 E 9. ./contrib/llvm-project/libcxx/include/__functional/function.h:0: ? @ 0x000000002e49edbd E 10. ./build_docker/./src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x000000002f22694a E 11. ./build_docker/./src/Interpreters/executeQuery.cpp:1376: DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x000000002f21d9ca E 12. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::TCPHandler::runImpl() @ 0x000000003272de3d E 13. 
./build_docker/./src/Server/TCPHandler.cpp:2341: DB::TCPHandler::run() @ 0x0000000032772891 E 14. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x000000003a25ceaf E 15. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x000000003a25db57 E 16. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:0: Poco::PooledThread::run() @ 0x000000003a66ab3c E 17. ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000003a663b28 E 18. asan_thread_start(void*) @ 0x000000000a7b9edb E 19. ? @ 0x00007ff7a113bac3 E 20. ? @ 0x00007ff7a11cd850 E . (UNKNOWN_TABLE) E (query: select * from `test_database.postgresql_replica_0` order by key;) helpers/client.py:239: QueryRuntimeException ---------------------------- Captured stdout setup ----------------------------- PostgreSQL is available - running test ----------------------------- Captured stdout call ----------------------------- Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_0" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_1" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_2" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_3" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_4" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Checking table postgresql_replica_0 exists in test_database Checking table is synchronized: test_database.postgresql_replica_0 ------------------------------ Captured log call ------------------------------- 2025-06-08 17:16:38 [ 509 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_0` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:16:38 [ 509 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_1` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:16:39 [ 509 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_2` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:16:39 [ 509 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_3` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:16:39 [ 509 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_4` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:16:39 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:16:39 [ 509 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.2.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) 2025-06-08 17:16:40 [ 509 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query) 2025-06-08 17:16:40 [ 509 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:16:40 [ 509 ] DEBUG : Executing query select * from `postgres_database`.`postgresql_replica_0` order by key; on instance (cluster.py:3455, query) 2025-06-08 17:16:40 [ 509 ] DEBUG : Executing query select * from 
`test_database.postgresql_replica_0` order by key; on instance (cluster.py:3455, query) ---------------------------- Captured log teardown ----------------------------- 2025-06-08 17:16:41 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3455, query) 2025-06-08 17:16:41 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query) 2025-06-08 17:16:41 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query) 2025-06-08 17:16:41 [ 509 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.2.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) _________________________ test_concurrent_transactions _________________________ [gw1] linux -- Python 3.10.12 /usr/bin/python3 started_cluster = def test_concurrent_transactions(started_cluster): def transaction(thread_id): conn = get_postgres_conn( ip=started_cluster.postgres_ip, port=started_cluster.postgres_port, database=True, auto_commit=False, ) cursor = conn.cursor() for query in queries: cursor.execute(query.format(thread_id)) print("thread {}, query {}".format(thread_id, query)) conn.commit() NUM_TABLES = 6 pg_manager.create_and_fill_postgres_tables(NUM_TABLES, numbers=0) threads = [] threads_num = 6 for i in range(threads_num): threads.append(threading.Thread(target=transaction, args=(i,))) pg_manager.create_materialized_db( ip=started_cluster.postgres_ip, port=started_cluster.postgres_port ) for thread in threads: time.sleep(random.uniform(0, 0.5)) thread.start() for thread in threads: thread.join() for i in range(NUM_TABLES): > check_tables_are_synchronized(instance, f"postgresql_replica_{i}") test_postgresql_replica_database_engine_1/test.py:691: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ helpers/postgres_utility.py:392: in check_tables_are_synchronized result = instance.query(result_query) helpers/cluster.py:3456: in query return self.client.query( helpers/client.py:36: in wrap return func(self, *args, **kwargs) helpers/client.py:74: in query ).get_answer() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def get_answer(self): self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT) self.stdout_file.seek(0) self.stderr_file.seek(0) stdout = self.stdout_file.read().decode("utf-8", errors="replace") stderr = self.stderr_file.read().decode("utf-8", errors="replace") if ( self.timer is not None and not self.process_finished_before_timeout and not self.ignore_error ): logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}") raise QueryTimeoutExceedException("Client timed out!") if ( self.process.returncode != 0 or self.remove_trash_from_stderr(stderr) ) and not self.ignore_error: > raise QueryRuntimeException( "Client failed! Return code: {}, stderr: {}".format( self.process.returncode, stderr ), self.process.returncode, stderr, ) E helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18): E Code: 60. DB::Exception: Received from 172.16.2.3:9000. DB::Exception: Unknown table expression identifier 'test_database.postgresql_replica_0' in scope SELECT * FROM `test_database.postgresql_replica_0` ORDER BY key ASC. Stack trace: E E 0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x000000003a554b73 E 1. 
./build_docker/./src/Common/Exception.cpp:96: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001cb49070 E 2. DB::Exception::Exception(int, FormatStringHelperImpl::type, std::type_identity::type>, String const&, String&&) @ 0x000000000c5c35b6 E 3. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:6971: DB::(anonymous namespace)::QueryAnalyzer::resolveQuery(std::shared_ptr const&, DB::(anonymous namespace)::IdentifierResolveScope&) @ 0x000000002dbba909 E 4. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::QueryAnalysisPass::run(std::shared_ptr&, std::shared_ptr) @ 0x000000002dbabc57 E 5. ./build_docker/./src/Analyzer/QueryTreePassManager.cpp:0: DB::QueryTreePassManager::run(std::shared_ptr) @ 0x000000002dba2d0e E 6. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:0: DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::shared_ptr const&, DB::SelectQueryOptions const&, std::shared_ptr const&, std::shared_ptr const&) @ 0x000000002e5ea236 E 7. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:160: DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::shared_ptr const&, std::shared_ptr const&, DB::SelectQueryOptions const&, std::vector> const&) @ 0x000000002e5e2d54 E 8. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:0: std::__unique_if::__unique_single std::make_unique[abi:v15000]&, std::shared_ptr const&, DB::SelectQueryOptions const&>(std::shared_ptr&, std::shared_ptr const&, DB::SelectQueryOptions const&) @ 0x000000002e5ede81 E 9. ./contrib/llvm-project/libcxx/include/__functional/function.h:0: ? @ 0x000000002e49edbd E 10. ./build_docker/./src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x000000002f22694a E 11. ./build_docker/./src/Interpreters/executeQuery.cpp:1376: DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x000000002f21d9ca E 12. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::TCPHandler::runImpl() @ 0x000000003272de3d E 13. ./build_docker/./src/Server/TCPHandler.cpp:2341: DB::TCPHandler::run() @ 0x0000000032772891 E 14. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x000000003a25ceaf E 15. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x000000003a25db57 E 16. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:0: Poco::PooledThread::run() @ 0x000000003a66ab3c E 17. ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000003a663b28 E 18. asan_thread_start(void*) @ 0x000000000a7b9edb E 19. ? @ 0x00007ff7a113bac3 E 20. ? @ 0x00007ff7a11cd850 E . 
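test_concurrent_transactions (source shown above) drives six writer threads, each holding its own non-autocommit PostgreSQL connection and committing once at the end, so each thread's whole batch becomes visible to logical replication as a single transaction. A self-contained sketch of that pattern using psycopg2 directly (the test goes through the harness helper get_postgres_conn instead; the connection literals and the two sample queries are taken from this log):

    import random
    import threading
    import time

    import psycopg2

    QUERIES = [
        "INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i);",
        "DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0;",
    ]

    def transaction(thread_id: int) -> None:
        # One connection per thread, autocommit off: every statement joins a
        # single transaction that replication only sees at commit time.
        conn = psycopg2.connect(
            host="172.16.2.2", port=5432, dbname="postgres_database",
            user="postgres", password="mysecretpassword",
        )
        with conn.cursor() as cursor:
            for query in QUERIES:
                cursor.execute(query.format(thread_id))
        conn.commit()

    threads = [threading.Thread(target=transaction, args=(i,)) for i in range(6)]
    for thread in threads:
        time.sleep(random.uniform(0, 0.5))  # stagger starts, as the test does
        thread.start()
    for thread in threads:
        thread.join()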
(UNKNOWN_TABLE) E (query: select * from `test_database.postgresql_replica_0` order by key;) helpers/client.py:239: QueryRuntimeException ---------------------------- Captured stdout setup ----------------------------- PostgreSQL is available - running test ----------------------------- Captured stdout call ----------------------------- Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_0" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_1" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_2" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_3" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_4" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_5" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) thread 0, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i); thread 0, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0; thread 0, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0; thread 0, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0 thread 0, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i); thread 0, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0; thread 0, query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1; thread 0, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1 thread 0, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0; thread 0, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0; thread 0, query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i); thread 0, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0; thread 0, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0; thread 0, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1 thread 0, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i); thread 1, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i); thread 1, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0; thread 0, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2; thread 1, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0; thread 2, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i); thread 2, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0; thread 1, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0 thread 2, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0; thread 1, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i); thread 1, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0; thread 2, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0 thread 2, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i); thread 2, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0; thread 1, query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1; thread 2, query UPDATE 
postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1; thread 1, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1 thread 3, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i); thread 1, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0; thread 1, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0; thread 3, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0; thread 3, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0; thread 2, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1 thread 4, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i); thread 4, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0; thread 3, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0 thread 2, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0; thread 2, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0; thread 4, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0; thread 3, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i); thread 3, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0; thread 4, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0 thread 4, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i); thread 4, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0; thread 5, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i); thread 3, query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1; thread 5, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0; thread 5, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0; thread 1, query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i); thread 5, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0 thread 4, query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1; thread 2, query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i); thread 3, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1 thread 1, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0; thread 3, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0; thread 3, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0; thread 5, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i); thread 5, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0; thread 2, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0; thread 1, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0; thread 2, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0; thread 4, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1 thread 0, query UPDATE postgresql_replica_{} SET key=key+10000000 thread 4, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0; thread 4, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0; thread 5, query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1; thread 1, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1 thread 5, query UPDATE 
postgresql_replica_{} SET key=key+80000 WHERE key%2=1 thread 2, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1 thread 5, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0; thread 5, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0; thread 3, query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i); thread 4, query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i); thread 3, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0; thread 3, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0; thread 4, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0; thread 4, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0; thread 1, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i); thread 0, query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1; thread 2, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i); thread 1, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2; thread 0, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0; thread 5, query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i); thread 2, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2; thread 3, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1 thread 5, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0; thread 5, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0; thread 4, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1 thread 5, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1 thread 3, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i); thread 4, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i); thread 4, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2; thread 3, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2; thread 5, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i); thread 5, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2; thread 2, query UPDATE postgresql_replica_{} SET key=key+10000000 thread 1, query UPDATE postgresql_replica_{} SET key=key+10000000 thread 3, query UPDATE postgresql_replica_{} SET key=key+10000000 thread 4, query UPDATE postgresql_replica_{} SET key=key+10000000 thread 2, query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1; thread 1, query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1; thread 2, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0; thread 1, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0; thread 5, query UPDATE postgresql_replica_{} SET key=key+10000000 thread 3, query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1; thread 4, query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1; thread 3, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0; thread 5, query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1; thread 4, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0; thread 5, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0; Checking table postgresql_replica_0 exists in test_database Checking 
table is synchronized: test_database.postgresql_replica_0
------------------------------ Captured log call -------------------------------
2025-06-08 17:16:41 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:16:42 [ 509 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.2.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
2025-06-08 17:16:42 [ 509 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query)
2025-06-08 17:16:45 [ 509 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:16:45 [ 509 ] DEBUG : Executing query select * from `postgres_database`.`postgresql_replica_0` order by key; on instance (cluster.py:3455, query)
2025-06-08 17:16:46 [ 509 ] DEBUG : Executing query select * from `test_database.postgresql_replica_0` order by key; on instance (cluster.py:3455, query)
---------------------------- Captured log teardown -----------------------------
2025-06-08 17:16:46 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3455, query)
2025-06-08 17:16:48 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:16:48 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:16:48 [ 509 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.2.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
__________________________ test_different_data_types ___________________________
[gw1] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 

    def test_different_data_types(started_cluster):
        conn = get_postgres_conn(
            ip=started_cluster.postgres_ip,
            port=started_cluster.postgres_port,
            database=True,
        )
        cursor = conn.cursor()
        cursor.execute("drop table if exists test_data_types;")
        cursor.execute("drop table if exists test_array_data_type;")
        cursor.execute(
            """CREATE TABLE test_data_types (
            id integer PRIMARY KEY,
            a smallint, b integer, c bigint, d real, e double precision,
            f serial, g bigserial, h timestamp, i date,
            j decimal(5, 5), k numeric(5, 5))"""
        )
        cursor.execute(
            """CREATE TABLE test_array_data_type (
            key Integer NOT NULL PRIMARY KEY,
            a Date[] NOT NULL,                  -- Date
            b Timestamp[] NOT NULL,             -- DateTime64(6)
            c real[][] NOT NULL,                -- Float32
            d double precision[][] NOT NULL,    -- Float64
            e decimal(5, 5)[][][] NOT NULL,     -- Decimal32
            f integer[][][] NOT NULL,           -- Int32
            g Text[][][][][] NOT NULL,          -- String
            h Integer[][][],                    -- Nullable(Int32)
            i Char(2)[][][][],                  -- Nullable(String)
            k Char(2)[]                         -- Nullable(String)
            )"""
        )
        pg_manager.create_materialized_db(
            ip=started_cluster.postgres_ip, port=started_cluster.postgres_port
        )
        for i in range(10):
            instance.query(
                """
                INSERT INTO postgres_database.test_data_types VALUES
                ({}, -32768, -2147483648, -9223372036854775808, 1.12345, 1.1234567890, 2147483647, 9223372036854775807, '2000-05-12 12:12:12.012345', '2000-05-12', 0.2, 0.2)""".format(
                    i
                )
            )
>       check_tables_are_synchronized(instance, "test_data_types", "id")

test_postgresql_replica_database_engine_1/test.py:170: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
helpers/postgres_utility.py:392: in check_tables_are_synchronized
    result = instance.query(result_query)
helpers/cluster.py:3456: in query return self.client.query( helpers/client.py:36: in wrap return func(self, *args, **kwargs) helpers/client.py:74: in query ).get_answer() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def get_answer(self): self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT) self.stdout_file.seek(0) self.stderr_file.seek(0) stdout = self.stdout_file.read().decode("utf-8", errors="replace") stderr = self.stderr_file.read().decode("utf-8", errors="replace") if ( self.timer is not None and not self.process_finished_before_timeout and not self.ignore_error ): logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}") raise QueryTimeoutExceedException("Client timed out!") if ( self.process.returncode != 0 or self.remove_trash_from_stderr(stderr) ) and not self.ignore_error: > raise QueryRuntimeException( "Client failed! Return code: {}, stderr: {}".format( self.process.returncode, stderr ), self.process.returncode, stderr, ) E helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18): E Code: 60. DB::Exception: Received from 172.16.2.3:9000. DB::Exception: Unknown table expression identifier 'test_database.test_data_types' in scope SELECT * FROM `test_database.test_data_types` ORDER BY id ASC. Stack trace: E E 0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x000000003a554b73 E 1. ./build_docker/./src/Common/Exception.cpp:96: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001cb49070 E 2. DB::Exception::Exception(int, FormatStringHelperImpl::type, std::type_identity::type>, String const&, String&&) @ 0x000000000c5c35b6 E 3. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:6971: DB::(anonymous namespace)::QueryAnalyzer::resolveQuery(std::shared_ptr const&, DB::(anonymous namespace)::IdentifierResolveScope&) @ 0x000000002dbba909 E 4. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::QueryAnalysisPass::run(std::shared_ptr&, std::shared_ptr) @ 0x000000002dbabc57 E 5. ./build_docker/./src/Analyzer/QueryTreePassManager.cpp:0: DB::QueryTreePassManager::run(std::shared_ptr) @ 0x000000002dba2d0e E 6. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:0: DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::shared_ptr const&, DB::SelectQueryOptions const&, std::shared_ptr const&, std::shared_ptr const&) @ 0x000000002e5ea236 E 7. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:160: DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::shared_ptr const&, std::shared_ptr const&, DB::SelectQueryOptions const&, std::vector> const&) @ 0x000000002e5e2d54 E 8. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:0: std::__unique_if::__unique_single std::make_unique[abi:v15000]&, std::shared_ptr const&, DB::SelectQueryOptions const&>(std::shared_ptr&, std::shared_ptr const&, DB::SelectQueryOptions const&) @ 0x000000002e5ede81 E 9. ./contrib/llvm-project/libcxx/include/__functional/function.h:0: ? @ 0x000000002e49edbd E 10. ./build_docker/./src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x000000002f22694a E 11. ./build_docker/./src/Interpreters/executeQuery.cpp:1376: DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x000000002f21d9ca E 12. 
./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::TCPHandler::runImpl() @ 0x000000003272de3d E 13. ./build_docker/./src/Server/TCPHandler.cpp:2341: DB::TCPHandler::run() @ 0x0000000032772891 E 14. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x000000003a25ceaf E 15. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x000000003a25db57 E 16. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:0: Poco::PooledThread::run() @ 0x000000003a66ab3c E 17. ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000003a663b28 E 18. asan_thread_start(void*) @ 0x000000000a7b9edb E 19. ? @ 0x00007ff7a113bac3 E 20. ? @ 0x00007ff7a11cd850 E . (UNKNOWN_TABLE) E (query: select * from `test_database.test_data_types` order by id;) helpers/client.py:239: QueryRuntimeException ---------------------------- Captured stdout setup ----------------------------- PostgreSQL is available - running test ----------------------------- Captured stdout call ----------------------------- Checking table test_data_types exists in test_database Checking table is synchronized: test_database.test_data_types ------------------------------ Captured log call ------------------------------- 2025-06-08 17:16:49 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:16:49 [ 509 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.2.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) 2025-06-08 17:16:49 [ 509 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query) 2025-06-08 17:16:49 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.test_data_types VALUES (0, -32768, -2147483648, -9223372036854775808, 1.12345, 1.1234567890, 2147483647, 9223372036854775807, '2000-05-12 12:12:12.012345', '2000-05-12', 0.2, 0.2) on instance (cluster.py:3455, query) 2025-06-08 17:16:50 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.test_data_types VALUES (1, -32768, -2147483648, -9223372036854775808, 1.12345, 1.1234567890, 2147483647, 9223372036854775807, '2000-05-12 12:12:12.012345', '2000-05-12', 0.2, 0.2) on instance (cluster.py:3455, query) 2025-06-08 17:16:50 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.test_data_types VALUES (2, -32768, -2147483648, -9223372036854775808, 1.12345, 1.1234567890, 2147483647, 9223372036854775807, '2000-05-12 12:12:12.012345', '2000-05-12', 0.2, 0.2) on instance (cluster.py:3455, query) 2025-06-08 17:16:50 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.test_data_types VALUES (3, -32768, -2147483648, -9223372036854775808, 1.12345, 1.1234567890, 2147483647, 9223372036854775807, '2000-05-12 12:12:12.012345', '2000-05-12', 0.2, 0.2) on instance (cluster.py:3455, query) 2025-06-08 17:16:50 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.test_data_types VALUES (4, -32768, -2147483648, -9223372036854775808, 1.12345, 1.1234567890, 2147483647, 9223372036854775807, '2000-05-12 12:12:12.012345', '2000-05-12', 0.2, 0.2) on instance (cluster.py:3455, query) 2025-06-08 17:16:50 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.test_data_types VALUES (5, -32768, -2147483648, -9223372036854775808, 1.12345, 1.1234567890, 2147483647, 9223372036854775807, '2000-05-12 12:12:12.012345', '2000-05-12', 0.2, 
0.2) on instance (cluster.py:3455, query) 2025-06-08 17:16:51 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.test_data_types VALUES (6, -32768, -2147483648, -9223372036854775808, 1.12345, 1.1234567890, 2147483647, 9223372036854775807, '2000-05-12 12:12:12.012345', '2000-05-12', 0.2, 0.2) on instance (cluster.py:3455, query) 2025-06-08 17:16:51 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.test_data_types VALUES (7, -32768, -2147483648, -9223372036854775808, 1.12345, 1.1234567890, 2147483647, 9223372036854775807, '2000-05-12 12:12:12.012345', '2000-05-12', 0.2, 0.2) on instance (cluster.py:3455, query) 2025-06-08 17:16:51 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.test_data_types VALUES (8, -32768, -2147483648, -9223372036854775808, 1.12345, 1.1234567890, 2147483647, 9223372036854775807, '2000-05-12 12:12:12.012345', '2000-05-12', 0.2, 0.2) on instance (cluster.py:3455, query) 2025-06-08 17:16:51 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.test_data_types VALUES (9, -32768, -2147483648, -9223372036854775808, 1.12345, 1.1234567890, 2147483647, 9223372036854775807, '2000-05-12 12:12:12.012345', '2000-05-12', 0.2, 0.2) on instance (cluster.py:3455, query) 2025-06-08 17:16:51 [ 509 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:16:51 [ 509 ] DEBUG : Executing query select * from `postgres_database`.`test_data_types` order by id; on instance (cluster.py:3455, query) 2025-06-08 17:16:52 [ 509 ] DEBUG : Executing query select * from `test_database.test_data_types` order by id; on instance (cluster.py:3455, query) ---------------------------- Captured log teardown ----------------------------- 2025-06-08 17:16:52 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3455, query) 2025-06-08 17:16:52 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query) 2025-06-08 17:16:53 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query) 2025-06-08 17:16:53 [ 509 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.2.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) ____________________ test_load_and_sync_all_database_tables ____________________ [gw1] linux -- Python 3.10.12 /usr/bin/python3 started_cluster = def test_load_and_sync_all_database_tables(started_cluster): NUM_TABLES = 5 pg_manager.create_and_fill_postgres_tables(NUM_TABLES) pg_manager.create_materialized_db( ip=started_cluster.postgres_ip, port=started_cluster.postgres_port ) > check_several_tables_are_synchronized(instance, NUM_TABLES) test_postgresql_replica_database_engine_1/test.py:74: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ helpers/postgres_utility.py:419: in check_several_tables_are_synchronized check_tables_are_synchronized( helpers/postgres_utility.py:392: in check_tables_are_synchronized result = instance.query(result_query) helpers/cluster.py:3456: in query return self.client.query( helpers/client.py:36: in wrap return func(self, *args, **kwargs) helpers/client.py:74: in query ).get_answer() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def get_answer(self): self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT) self.stdout_file.seek(0) self.stderr_file.seek(0) stdout = 
self.stdout_file.read().decode("utf-8", errors="replace") stderr = self.stderr_file.read().decode("utf-8", errors="replace") if ( self.timer is not None and not self.process_finished_before_timeout and not self.ignore_error ): logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}") raise QueryTimeoutExceedException("Client timed out!") if ( self.process.returncode != 0 or self.remove_trash_from_stderr(stderr) ) and not self.ignore_error: > raise QueryRuntimeException( "Client failed! Return code: {}, stderr: {}".format( self.process.returncode, stderr ), self.process.returncode, stderr, ) E helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18): E Code: 60. DB::Exception: Received from 172.16.2.3:9000. DB::Exception: Unknown table expression identifier 'test_database.postgresql_replica_0' in scope SELECT * FROM `test_database.postgresql_replica_0` ORDER BY key ASC. Stack trace: E E 0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x000000003a554b73 E 1. ./build_docker/./src/Common/Exception.cpp:96: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001cb49070 E 2. DB::Exception::Exception(int, FormatStringHelperImpl::type, std::type_identity::type>, String const&, String&&) @ 0x000000000c5c35b6 E 3. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:6971: DB::(anonymous namespace)::QueryAnalyzer::resolveQuery(std::shared_ptr const&, DB::(anonymous namespace)::IdentifierResolveScope&) @ 0x000000002dbba909 E 4. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::QueryAnalysisPass::run(std::shared_ptr&, std::shared_ptr) @ 0x000000002dbabc57 E 5. ./build_docker/./src/Analyzer/QueryTreePassManager.cpp:0: DB::QueryTreePassManager::run(std::shared_ptr) @ 0x000000002dba2d0e E 6. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:0: DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::shared_ptr const&, DB::SelectQueryOptions const&, std::shared_ptr const&, std::shared_ptr const&) @ 0x000000002e5ea236 E 7. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:160: DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::shared_ptr const&, std::shared_ptr const&, DB::SelectQueryOptions const&, std::vector> const&) @ 0x000000002e5e2d54 E 8. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:0: std::__unique_if::__unique_single std::make_unique[abi:v15000]&, std::shared_ptr const&, DB::SelectQueryOptions const&>(std::shared_ptr&, std::shared_ptr const&, DB::SelectQueryOptions const&) @ 0x000000002e5ede81 E 9. ./contrib/llvm-project/libcxx/include/__functional/function.h:0: ? @ 0x000000002e49edbd E 10. ./build_docker/./src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x000000002f22694a E 11. ./build_docker/./src/Interpreters/executeQuery.cpp:1376: DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x000000002f21d9ca E 12. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::TCPHandler::runImpl() @ 0x000000003272de3d E 13. ./build_docker/./src/Server/TCPHandler.cpp:2341: DB::TCPHandler::run() @ 0x0000000032772891 E 14. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x000000003a25ceaf E 15. 
./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x000000003a25db57 E 16. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:0: Poco::PooledThread::run() @ 0x000000003a66ab3c E 17. ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000003a663b28 E 18. asan_thread_start(void*) @ 0x000000000a7b9edb E 19. ? @ 0x00007ff7a113bac3 E 20. ? @ 0x00007ff7a11cd850 E . (UNKNOWN_TABLE) E (query: select * from `test_database.postgresql_replica_0` order by key;) helpers/client.py:239: QueryRuntimeException ---------------------------- Captured stdout setup ----------------------------- PostgreSQL is available - running test ----------------------------- Captured stdout call ----------------------------- Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_0" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_1" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_2" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_3" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_4" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Checking table postgresql_replica_0 exists in test_database Checking table is synchronized: test_database.postgresql_replica_0 ------------------------------ Captured log call ------------------------------- 2025-06-08 17:17:09 [ 509 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_0` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:17:10 [ 509 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_1` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:17:10 [ 509 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_2` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:17:10 [ 509 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_3` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:17:10 [ 509 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_4` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:17:10 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:17:11 [ 509 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.2.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) 2025-06-08 17:17:11 [ 509 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query) 2025-06-08 17:17:11 [ 509 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:17:11 [ 509 ] DEBUG : Executing query select * from `postgres_database`.`postgresql_replica_0` order by key; on instance (cluster.py:3455, query) 2025-06-08 17:17:11 [ 509 ] DEBUG : Executing query select * from `test_database.postgresql_replica_0` order by key; on instance (cluster.py:3455, query) ---------------------------- Captured log teardown ----------------------------- 2025-06-08 17:17:12 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS 
`test_database` SYNC on instance (cluster.py:3455, query) 2025-06-08 17:17:12 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query) 2025-06-08 17:17:12 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query) 2025-06-08 17:17:12 [ 509 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.2.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) _________________ test_load_and_sync_subset_of_database_tables _________________ [gw1] linux -- Python 3.10.12 /usr/bin/python3 started_cluster = def test_load_and_sync_subset_of_database_tables(started_cluster): NUM_TABLES = 10 pg_manager.create_and_fill_postgres_tables(NUM_TABLES) publication_tables = "" for i in range(NUM_TABLES): if i < int(NUM_TABLES / 2): if publication_tables != "": publication_tables += ", " publication_tables += f"postgresql_replica_{i}" pg_manager.create_materialized_db( ip=started_cluster.postgres_ip, port=started_cluster.postgres_port, settings=[ "materialized_postgresql_tables_list = '{}'".format(publication_tables) ], ) time.sleep(1) for i in range(int(NUM_TABLES / 2)): table_name = f"postgresql_replica_{i}" assert_nested_table_is_created(instance, table_name) result = instance.query( """SELECT count() FROM system.tables WHERE database = 'test_database';""" ) assert int(result) == int(NUM_TABLES / 2) database_tables = instance.query("SHOW TABLES FROM test_database") for i in range(NUM_TABLES): table_name = "postgresql_replica_{}".format(i) if i < int(NUM_TABLES / 2): assert table_name in database_tables else: assert table_name not in database_tables instance.query( "INSERT INTO postgres_database.{} SELECT 50 + number, {} from numbers(100)".format( table_name, i ) ) for i in range(NUM_TABLES): table_name = f"postgresql_replica_{i}" if i < int(NUM_TABLES / 2): > check_tables_are_synchronized(instance, table_name) test_postgresql_replica_database_engine_1/test.py:276: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ helpers/postgres_utility.py:392: in check_tables_are_synchronized result = instance.query(result_query) helpers/cluster.py:3456: in query return self.client.query( helpers/client.py:36: in wrap return func(self, *args, **kwargs) helpers/client.py:74: in query ).get_answer() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def get_answer(self): self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT) self.stdout_file.seek(0) self.stderr_file.seek(0) stdout = self.stdout_file.read().decode("utf-8", errors="replace") stderr = self.stderr_file.read().decode("utf-8", errors="replace") if ( self.timer is not None and not self.process_finished_before_timeout and not self.ignore_error ): logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}") raise QueryTimeoutExceedException("Client timed out!") if ( self.process.returncode != 0 or self.remove_trash_from_stderr(stderr) ) and not self.ignore_error: > raise QueryRuntimeException( "Client failed! Return code: {}, stderr: {}".format( self.process.returncode, stderr ), self.process.returncode, stderr, ) E helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18): E Code: 60. DB::Exception: Received from 172.16.2.3:9000. 
DB::Exception: Unknown table expression identifier 'test_database.postgresql_replica_0' in scope SELECT * FROM `test_database.postgresql_replica_0` ORDER BY key ASC. Stack trace: E E 0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x000000003a554b73 E 1. ./build_docker/./src/Common/Exception.cpp:96: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001cb49070 E 2. DB::Exception::Exception(int, FormatStringHelperImpl::type, std::type_identity::type>, String const&, String&&) @ 0x000000000c5c35b6 E 3. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:6971: DB::(anonymous namespace)::QueryAnalyzer::resolveQuery(std::shared_ptr const&, DB::(anonymous namespace)::IdentifierResolveScope&) @ 0x000000002dbba909 E 4. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::QueryAnalysisPass::run(std::shared_ptr&, std::shared_ptr) @ 0x000000002dbabc57 E 5. ./build_docker/./src/Analyzer/QueryTreePassManager.cpp:0: DB::QueryTreePassManager::run(std::shared_ptr) @ 0x000000002dba2d0e E 6. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:0: DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::shared_ptr const&, DB::SelectQueryOptions const&, std::shared_ptr const&, std::shared_ptr const&) @ 0x000000002e5ea236 E 7. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:160: DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::shared_ptr const&, std::shared_ptr const&, DB::SelectQueryOptions const&, std::vector> const&) @ 0x000000002e5e2d54 E 8. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:0: std::__unique_if::__unique_single std::make_unique[abi:v15000]&, std::shared_ptr const&, DB::SelectQueryOptions const&>(std::shared_ptr&, std::shared_ptr const&, DB::SelectQueryOptions const&) @ 0x000000002e5ede81 E 9. ./contrib/llvm-project/libcxx/include/__functional/function.h:0: ? @ 0x000000002e49edbd E 10. ./build_docker/./src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x000000002f22694a E 11. ./build_docker/./src/Interpreters/executeQuery.cpp:1376: DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x000000002f21d9ca E 12. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::TCPHandler::runImpl() @ 0x000000003272de3d E 13. ./build_docker/./src/Server/TCPHandler.cpp:2341: DB::TCPHandler::run() @ 0x0000000032772891 E 14. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x000000003a25ceaf E 15. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x000000003a25db57 E 16. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:0: Poco::PooledThread::run() @ 0x000000003a66ab3c E 17. ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000003a663b28 E 18. asan_thread_start(void*) @ 0x000000000a7b9edb E 19. ? @ 0x00007ff7a113bac3 E 20. ? @ 0x00007ff7a11cd850 E . 
(UNKNOWN_TABLE) E (query: select * from `test_database.postgresql_replica_0` order by key;) helpers/client.py:239: QueryRuntimeException ---------------------------- Captured stdout setup ----------------------------- PostgreSQL is available - running test ----------------------------- Captured stdout call ----------------------------- Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_0" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_1" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_2" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_3" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_4" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_5" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_6" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_7" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_8" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_9" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Checking table postgresql_replica_0 exists in test_database Checking table postgresql_replica_1 exists in test_database Checking table postgresql_replica_2 exists in test_database Checking table postgresql_replica_3 exists in test_database Checking table postgresql_replica_4 exists in test_database Checking table postgresql_replica_0 exists in test_database Checking table is synchronized: test_database.postgresql_replica_0 ------------------------------ Captured log call ------------------------------- 2025-06-08 17:17:12 [ 509 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_0` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:17:13 [ 509 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_1` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:17:13 [ 509 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_2` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:17:13 [ 509 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_3` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:17:13 [ 509 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_4` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:17:13 [ 509 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_5` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:17:13 [ 509 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_6` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:17:14 [ 509 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_7` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:17:14 [ 509 ] DEBUG : Executing query INSERT INTO 
`postgres_database`.`postgresql_replica_8` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:17:14 [ 509 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_9` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:17:14 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:17:14 [ 509 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.2.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') SETTINGS materialized_postgresql_tables_list = 'postgresql_replica_0, postgresql_replica_1, postgresql_replica_2, postgresql_replica_3, postgresql_replica_4' on instance (cluster.py:3455, query) 2025-06-08 17:17:15 [ 509 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query) 2025-06-08 17:17:16 [ 509 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:17:16 [ 509 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:17:16 [ 509 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:17:16 [ 509 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:17:17 [ 509 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:17:17 [ 509 ] DEBUG : Executing query SELECT count() FROM system.tables WHERE database = 'test_database'; on instance (cluster.py:3455, query) 2025-06-08 17:17:17 [ 509 ] DEBUG : Executing query SHOW TABLES FROM test_database on instance (cluster.py:3455, query) 2025-06-08 17:17:17 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_0 SELECT 50 + number, 0 from numbers(100) on instance (cluster.py:3455, query) 2025-06-08 17:17:17 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_1 SELECT 50 + number, 1 from numbers(100) on instance (cluster.py:3455, query) 2025-06-08 17:17:18 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_2 SELECT 50 + number, 2 from numbers(100) on instance (cluster.py:3455, query) 2025-06-08 17:17:18 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_3 SELECT 50 + number, 3 from numbers(100) on instance (cluster.py:3455, query) 2025-06-08 17:17:18 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_4 SELECT 50 + number, 4 from numbers(100) on instance (cluster.py:3455, query) 2025-06-08 17:17:18 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_5 SELECT 50 + number, 5 from numbers(100) on instance (cluster.py:3455, query) 2025-06-08 17:17:18 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_6 SELECT 50 + number, 6 from numbers(100) on instance (cluster.py:3455, query) 2025-06-08 17:17:18 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_7 SELECT 50 + number, 7 from numbers(100) on instance (cluster.py:3455, query) 2025-06-08 17:17:19 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_8 SELECT 50 + number, 8 from numbers(100) on instance (cluster.py:3455, query) 2025-06-08 17:17:19 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_9 SELECT 50 + number, 9 from 
numbers(100) on instance (cluster.py:3455, query)
2025-06-08 17:17:19 [ 509 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:17:19 [ 509 ] DEBUG : Executing query select * from `postgres_database`.`postgresql_replica_0` order by key; on instance (cluster.py:3455, query)
2025-06-08 17:17:19 [ 509 ] DEBUG : Executing query select * from `test_database.postgresql_replica_0` order by key; on instance (cluster.py:3455, query)
---------------------------- Captured log teardown -----------------------------
2025-06-08 17:17:20 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3455, query)
2025-06-08 17:17:20 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:17:20 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:17:20 [ 509 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.2.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
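Note: every failed check in this run dies with the same UNKNOWN_TABLE (code 60) error, and in each case the query that fails quotes the fully qualified name as a single identifier (`test_database.postgresql_replica_0`), while the query issued immediately before it succeeds because it quotes database and table separately (`postgres_database`.`postgresql_replica_0`). A minimal sketch of the difference, with hypothetical helper names (this is not the actual helpers/postgres_utility.py code):

# Hypothetical helpers illustrating the identifier quoting seen in this log.

def qualified_name(database: str, table: str) -> str:
    # Each part quoted separately -- the form that resolves correctly:
    #   select * from `postgres_database`.`postgresql_replica_0` order by key;
    return f"`{database}`.`{table}`"

def misquoted_name(database: str, table: str) -> str:
    # The whole dotted name quoted as one identifier -- the form that fails
    # above with "Unknown table expression identifier":
    #   select * from `test_database.postgresql_replica_0` order by key;
    return f"`{database}.{table}`"

print(qualified_name("test_database", "postgresql_replica_0"))  # `test_database`.`postgresql_replica_0`
print(misquoted_name("test_database", "postgresql_replica_0"))  # `test_database.postgresql_replica_0`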
_________________________ test_many_concurrent_queries _________________________
[gw1] linux -- Python 3.10.12 /usr/bin/python3

started_cluster =

    def test_many_concurrent_queries(started_cluster):
        table = "test_many_conc"
        query_pool = [
            "DELETE FROM {} WHERE (value*value) % 3 = 0;",
            "UPDATE {} SET value = value - 125 WHERE key % 2 = 0;",
            "DELETE FROM {} WHERE key % 10 = 0;",
            "UPDATE {} SET value = value*5 WHERE key % 2 = 1;",
            "DELETE FROM {} WHERE value % 2 = 0;",
            "UPDATE {} SET value = value + 2000 WHERE key % 5 = 0;",
            "DELETE FROM {} WHERE value % 3 = 0;",
            "UPDATE {} SET value = value * 2 WHERE key % 3 = 0;",
            "DELETE FROM {} WHERE value % 9 = 2;",
            "UPDATE {} SET value = value + 2 WHERE key % 3 = 1;",
            "DELETE FROM {} WHERE value%5 = 0;",
        ]
        NUM_TABLES = 5

        conn = get_postgres_conn(
            ip=started_cluster.postgres_ip,
            port=started_cluster.postgres_port,
            database=True,
        )
        cursor = conn.cursor()
        pg_manager.create_and_fill_postgres_tables(
            NUM_TABLES, numbers=10000, table_name_base=table
        )

        def attack(thread_id):
            print("thread {}".format(thread_id))
            k = 10000
            for i in range(20):
                query_id = random.randrange(0, len(query_pool) - 1)
                table_id = random.randrange(0, 5)  # num tables
                random_table_name = f"{table}_{table_id}"
                table_name = f"{table}_{thread_id}"

                # random update / delete query
                cursor.execute(query_pool[query_id].format(random_table_name))
                print(
                    "Executing for table {} query: {}".format(
                        random_table_name, query_pool[query_id]
                    )
                )

                # allow some thread to do inserts (not to violate key constraints)
                if thread_id < 5:
                    print("try insert table {}".format(thread_id))
                    instance.query(
                        "INSERT INTO postgres_database.{} SELECT {}*10000*({} + number), number from numbers(1000)".format(
                            table_name, thread_id, k
                        )
                    )
                    k += 1
                    print("insert table {} ok".format(thread_id))

                    if i == 5:
                        # also change primary key value
                        print("try update primary key {}".format(thread_id))
                        cursor.execute(
                            "UPDATE {} SET key=key%100000+100000*{} WHERE key%{}=0".format(
                                table_name, i + 1, i + 1
                            )
                        )
                        print("update primary key {} ok".format(thread_id))

        n = [10000]
        threads = []
        threads_num = 16
        for i in range(threads_num):
            threads.append(threading.Thread(target=attack, args=(i,)))

        pg_manager.create_materialized_db(
            ip=started_cluster.postgres_ip, port=started_cluster.postgres_port
        )

        for thread in threads:
            time.sleep(random.uniform(0, 1))
            thread.start()

        n[0] = 50000
        for table_id in range(NUM_TABLES):
            n[0] += 1
            table_name = f"{table}_{table_id}"
            instance.query(
                "INSERT INTO postgres_database.{} SELECT {} + number, number from numbers(5000)".format(
                    table_name, n[0]
                )
            )
            # cursor.execute("UPDATE {table}_{} SET key=key%100000+100000*{} WHERE key%{}=0".format(table_id, table_id+1, table_id+1))

        for thread in threads:
            thread.join()

        for i in range(NUM_TABLES):
            table_name = f"{table}_{i}"
>           check_tables_are_synchronized(instance, table_name)

test_postgresql_replica_database_engine_1/test.py:492:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:392: in check_tables_are_synchronized
    result = instance.query(result_query)
helpers/cluster.py:3456: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E           helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18):
E           Code: 60. DB::Exception: Received from 172.16.2.3:9000. DB::Exception: Unknown table expression identifier 'test_database.test_many_conc_0' in scope SELECT * FROM `test_database.test_many_conc_0` ORDER BY key ASC. Stack trace:
E
E           0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x000000003a554b73
E           1. ./build_docker/./src/Common/Exception.cpp:96: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001cb49070
E           2. DB::Exception::Exception(int, FormatStringHelperImpl::type, std::type_identity::type>, String const&, String&&) @ 0x000000000c5c35b6
E           3. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:6971: DB::(anonymous namespace)::QueryAnalyzer::resolveQuery(std::shared_ptr const&, DB::(anonymous namespace)::IdentifierResolveScope&) @ 0x000000002dbba909
E           4. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::QueryAnalysisPass::run(std::shared_ptr&, std::shared_ptr) @ 0x000000002dbabc57
E           5. ./build_docker/./src/Analyzer/QueryTreePassManager.cpp:0: DB::QueryTreePassManager::run(std::shared_ptr) @ 0x000000002dba2d0e
E           6. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:0: DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::shared_ptr const&, DB::SelectQueryOptions const&, std::shared_ptr const&, std::shared_ptr const&) @ 0x000000002e5ea236
E           7. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:160: DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::shared_ptr const&, std::shared_ptr const&, DB::SelectQueryOptions const&, std::vector> const&) @ 0x000000002e5e2d54
E           8.
./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:0: std::__unique_if::__unique_single std::make_unique[abi:v15000]&, std::shared_ptr const&, DB::SelectQueryOptions const&>(std::shared_ptr&, std::shared_ptr const&, DB::SelectQueryOptions const&) @ 0x000000002e5ede81 E 9. ./contrib/llvm-project/libcxx/include/__functional/function.h:0: ? @ 0x000000002e49edbd E 10. ./build_docker/./src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x000000002f22694a E 11. ./build_docker/./src/Interpreters/executeQuery.cpp:1376: DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x000000002f21d9ca E 12. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::TCPHandler::runImpl() @ 0x000000003272de3d E 13. ./build_docker/./src/Server/TCPHandler.cpp:2341: DB::TCPHandler::run() @ 0x0000000032772891 E 14. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x000000003a25ceaf E 15. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x000000003a25db57 E 16. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:0: Poco::PooledThread::run() @ 0x000000003a66ab3c E 17. ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000003a663b28 E 18. asan_thread_start(void*) @ 0x000000000a7b9edb E 19. ? @ 0x00007ff7a113bac3 E 20. ? @ 0x00007ff7a11cd850 E . (UNKNOWN_TABLE) E (query: select * from `test_database.test_many_conc_0` order by key;) helpers/client.py:239: QueryRuntimeException ---------------------------- Captured stdout setup ----------------------------- PostgreSQL is available - running test ----------------------------- Captured stdout call ----------------------------- Query: CREATE TABLE IF NOT EXISTS "test_many_conc_0" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "test_many_conc_1" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "test_many_conc_2" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "test_many_conc_3" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "test_many_conc_4" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) thread 0 Executing for table test_many_conc_0 query: UPDATE {} SET value = value + 2000 WHERE key % 5 = 0; try insert table 0 thread 1 Executing for table test_many_conc_1 query: DELETE FROM {} WHERE value % 2 = 0; try insert table 1 insert table 1 ok Executing for table test_many_conc_0 query: DELETE FROM {} WHERE value % 3 = 0; try insert table 1 thread 2 Executing for table test_many_conc_2 query: DELETE FROM {} WHERE key % 10 = 0; try insert table 2 insert table 2 ok Executing for table test_many_conc_0 query: UPDATE {} SET value = value + 2000 WHERE key % 5 = 0; try insert table 2 thread 3 Executing for table test_many_conc_3 query: UPDATE {} SET value = value + 2000 WHERE key % 5 = 0; try insert table 3 insert table 3 ok Executing for table test_many_conc_4 query: DELETE FROM {} WHERE (value*value) % 3 = 0; try insert table 3 thread 4 Executing for table test_many_conc_1 query: DELETE FROM {} WHERE value % 2 = 0; try insert table 4 insert table 4 ok Executing for table test_many_conc_1 query: DELETE FROM {} WHERE (value*value) % 3 = 0; try insert table 4 thread 5 
Executing for table test_many_conc_2 query: UPDATE {} SET value = value * 2 WHERE key % 3 = 0; Executing for table test_many_conc_4 query: UPDATE {} SET value = value - 125 WHERE key % 2 = 0; Executing for table test_many_conc_1 query: UPDATE {} SET value = value*5 WHERE key % 2 = 1; thread 6 Executing for table test_many_conc_1 query: DELETE FROM {} WHERE value % 2 = 0; Executing for table test_many_conc_1 query: DELETE FROM {} WHERE value % 3 = 0; Executing for table test_many_conc_0 query: DELETE FROM {} WHERE value % 2 = 0; thread 7 Executing for table test_many_conc_0 query: DELETE FROM {} WHERE (value*value) % 3 = 0; Executing for table test_many_conc_2 query: UPDATE {} SET value = value - 125 WHERE key % 2 = 0; Executing for table test_many_conc_0 query: DELETE FROM {} WHERE value % 3 = 0; Executing for table test_many_conc_1 query: DELETE FROM {} WHERE key % 10 = 0; Executing for table test_many_conc_4 query: DELETE FROM {} WHERE key % 10 = 0; Executing for table test_many_conc_0 query: UPDATE {} SET value = value*5 WHERE key % 2 = 1; Executing for table test_many_conc_4 query: DELETE FROM {} WHERE (value*value) % 3 = 0; Executing for table test_many_conc_2 query: UPDATE {} SET value = value + 2000 WHERE key % 5 = 0; Executing for table test_many_conc_4 query: UPDATE {} SET value = value*5 WHERE key % 2 = 1; Executing for table test_many_conc_3 query: DELETE FROM {} WHERE (value*value) % 3 = 0; Executing for table test_many_conc_0 query: UPDATE {} SET value = value*5 WHERE key % 2 = 1; Executing for table test_many_conc_2 query: DELETE FROM {} WHERE (value*value) % 3 = 0; Executing for table test_many_conc_0 query: UPDATE {} SET value = value + 2 WHERE key % 3 = 1; Executing for table test_many_conc_4 query: UPDATE {} SET value = value + 2 WHERE key % 3 = 1; Executing for table test_many_conc_0 query: UPDATE {} SET value = value + 2000 WHERE key % 5 = 0; Executing for table test_many_conc_2 query: UPDATE {} SET value = value + 2 WHERE key % 3 = 1; thread 8 Executing for table test_many_conc_0 query: DELETE FROM {} WHERE value % 9 = 2; Executing for table test_many_conc_4 query: DELETE FROM {} WHERE value % 2 = 0; Executing for table test_many_conc_1 query: DELETE FROM {} WHERE value % 9 = 2; Executing for table test_many_conc_0 query: UPDATE {} SET value = value + 2 WHERE key % 3 = 1; Executing for table test_many_conc_1 query: DELETE FROM {} WHERE value % 2 = 0; Executing for table test_many_conc_4 query: DELETE FROM {} WHERE value % 2 = 0; Executing for table test_many_conc_0 query: UPDATE {} SET value = value * 2 WHERE key % 3 = 0; Executing for table test_many_conc_4 query: UPDATE {} SET value = value + 2 WHERE key % 3 = 1; Executing for table test_many_conc_2 query: UPDATE {} SET value = value*5 WHERE key % 2 = 1; Executing for table test_many_conc_1 query: DELETE FROM {} WHERE value % 9 = 2; thread 9 Executing for table test_many_conc_3 query: UPDATE {} SET value = value + 2000 WHERE key % 5 = 0; Executing for table test_many_conc_2 query: UPDATE {} SET value = value - 125 WHERE key % 2 = 0; Executing for table test_many_conc_2 query: UPDATE {} SET value = value - 125 WHERE key % 2 = 0; Executing for table test_many_conc_2 query: DELETE FROM {} WHERE value % 3 = 0; Executing for table test_many_conc_4 query: UPDATE {} SET value = value - 125 WHERE key % 2 = 0; Executing for table test_many_conc_2 query: UPDATE {} SET value = value*5 WHERE key % 2 = 1; Executing for table test_many_conc_1 query: DELETE FROM {} WHERE value % 2 = 0; thread 10 Executing for table test_many_conc_4 
query: UPDATE {} SET value = value * 2 WHERE key % 3 = 0; Executing for table test_many_conc_1 query: UPDATE {} SET value = value*5 WHERE key % 2 = 1; thread 11 Executing for table test_many_conc_1 query: DELETE FROM {} WHERE key % 10 = 0; Executing for table test_many_conc_3 query: DELETE FROM {} WHERE value % 9 = 2; Executing for table test_many_conc_3 query: UPDATE {} SET value = value + 2000 WHERE key % 5 = 0; Executing for table test_many_conc_1 query: UPDATE {} SET value = value - 125 WHERE key % 2 = 0; Executing for table test_many_conc_0 query: UPDATE {} SET value = value*5 WHERE key % 2 = 1; Executing for table test_many_conc_1 query: DELETE FROM {} WHERE value % 9 = 2; Executing for table test_many_conc_4 query: DELETE FROM {} WHERE key % 10 = 0; Executing for table test_many_conc_0 query: DELETE FROM {} WHERE value % 2 = 0; Executing for table test_many_conc_3 query: DELETE FROM {} WHERE value % 3 = 0; Executing for table test_many_conc_2 query: DELETE FROM {} WHERE value % 9 = 2; Executing for table test_many_conc_4 query: DELETE FROM {} WHERE key % 10 = 0; Executing for table test_many_conc_0 query: DELETE FROM {} WHERE key % 10 = 0; Executing for table test_many_conc_1 query: DELETE FROM {} WHERE value % 2 = 0; Executing for table test_many_conc_3 query: UPDATE {} SET value = value + 2000 WHERE key % 5 = 0; Executing for table test_many_conc_1 query: DELETE FROM {} WHERE value % 9 = 2; Executing for table test_many_conc_3 query: DELETE FROM {} WHERE value % 9 = 2; Executing for table test_many_conc_1 query: UPDATE {} SET value = value + 2000 WHERE key % 5 = 0; Executing for table test_many_conc_2 query: DELETE FROM {} WHERE value % 9 = 2; Executing for table test_many_conc_3 query: DELETE FROM {} WHERE value % 3 = 0; thread 12 Executing for table test_many_conc_1 query: UPDATE {} SET value = value - 125 WHERE key % 2 = 0; Executing for table test_many_conc_4 query: UPDATE {} SET value = value + 2000 WHERE key % 5 = 0; Executing for table test_many_conc_4 query: UPDATE {} SET value = value - 125 WHERE key % 2 = 0; Executing for table test_many_conc_3 query: DELETE FROM {} WHERE value % 2 = 0; Executing for table test_many_conc_1 query: UPDATE {} SET value = value + 2000 WHERE key % 5 = 0; Executing for table test_many_conc_0 query: UPDATE {} SET value = value + 2 WHERE key % 3 = 1; Executing for table test_many_conc_3 query: DELETE FROM {} WHERE value % 3 = 0; Executing for table test_many_conc_1 query: UPDATE {} SET value = value - 125 WHERE key % 2 = 0; Executing for table test_many_conc_0 query: UPDATE {} SET value = value - 125 WHERE key % 2 = 0; Executing for table test_many_conc_3 query: DELETE FROM {} WHERE (value*value) % 3 = 0; Executing for table test_many_conc_2 query: DELETE FROM {} WHERE key % 10 = 0; Executing for table test_many_conc_2 query: UPDATE {} SET value = value + 2 WHERE key % 3 = 1; Executing for table test_many_conc_2 query: UPDATE {} SET value = value + 2 WHERE key % 3 = 1; Executing for table test_many_conc_3 query: UPDATE {} SET value = value + 2 WHERE key % 3 = 1; Executing for table test_many_conc_2 query: UPDATE {} SET value = value * 2 WHERE key % 3 = 0; Executing for table test_many_conc_0 query: DELETE FROM {} WHERE key % 10 = 0; Executing for table test_many_conc_0 query: DELETE FROM {} WHERE value % 2 = 0; Executing for table test_many_conc_0 query: UPDATE {} SET value = value - 125 WHERE key % 2 = 0; Executing for table test_many_conc_0 query: DELETE FROM {} WHERE value % 3 = 0; Executing for table test_many_conc_3 query: UPDATE {} SET 
value = value + 2 WHERE key % 3 = 1; thread 13 Executing for table test_many_conc_3 query: UPDATE {} SET value = value * 2 WHERE key % 3 = 0; Executing for table test_many_conc_2 query: UPDATE {} SET value = value + 2 WHERE key % 3 = 1; thread 14 Executing for table test_many_conc_2 query: UPDATE {} SET value = value + 2000 WHERE key % 5 = 0; Executing for table test_many_conc_2 query: DELETE FROM {} WHERE value % 9 = 2; Executing for table test_many_conc_1 query: UPDATE {} SET value = value + 2000 WHERE key % 5 = 0; Executing for table test_many_conc_4 query: UPDATE {} SET value = value + 2000 WHERE key % 5 = 0; Executing for table test_many_conc_2 query: UPDATE {} SET value = value*5 WHERE key % 2 = 1; thread 15 Executing for table test_many_conc_1 query: UPDATE {} SET value = value - 125 WHERE key % 2 = 0; Executing for table test_many_conc_4 query: DELETE FROM {} WHERE value % 9 = 2; Executing for table test_many_conc_3 query: DELETE FROM {} WHERE value % 9 = 2; Executing for table test_many_conc_4 query: UPDATE {} SET value = value + 2000 WHERE key % 5 = 0; Executing for table test_many_conc_2 query: DELETE FROM {} WHERE value % 2 = 0; Executing for table test_many_conc_4 query: DELETE FROM {} WHERE key % 10 = 0; Executing for table test_many_conc_1 query: UPDATE {} SET value = value + 2 WHERE key % 3 = 1; Executing for table test_many_conc_4 query: DELETE FROM {} WHERE value % 9 = 2; Executing for table test_many_conc_4 query: UPDATE {} SET value = value*5 WHERE key % 2 = 1; Executing for table test_many_conc_0 query: DELETE FROM {} WHERE key % 10 = 0; Executing for table test_many_conc_2 query: DELETE FROM {} WHERE value % 2 = 0; Executing for table test_many_conc_0 query: UPDATE {} SET value = value - 125 WHERE key % 2 = 0; Executing for table test_many_conc_0 query: UPDATE {} SET value = value + 2000 WHERE key % 5 = 0; Executing for table test_many_conc_3 query: DELETE FROM {} WHERE key % 10 = 0; Executing for table test_many_conc_3 query: DELETE FROM {} WHERE value % 2 = 0; Executing for table test_many_conc_0 query: DELETE FROM {} WHERE key % 10 = 0; Executing for table test_many_conc_1 query: UPDATE {} SET value = value + 2 WHERE key % 3 = 1; Executing for table test_many_conc_1 query: DELETE FROM {} WHERE key % 10 = 0; Executing for table test_many_conc_4 query: DELETE FROM {} WHERE key % 10 = 0; Executing for table test_many_conc_2 query: DELETE FROM {} WHERE value % 9 = 2; Checking table test_many_conc_0 exists in test_database Checking table is synchronized: test_database.test_many_conc_0 ------------------------------ Captured log call ------------------------------- 2025-06-08 17:17:20 [ 509 ] DEBUG : Executing query INSERT INTO `postgres_database`.`test_many_conc_0` SELECT number, number from numbers(10000) on instance (cluster.py:3455, query) 2025-06-08 17:17:21 [ 509 ] DEBUG : Executing query INSERT INTO `postgres_database`.`test_many_conc_1` SELECT number, number from numbers(10000) on instance (cluster.py:3455, query) 2025-06-08 17:17:21 [ 509 ] DEBUG : Executing query INSERT INTO `postgres_database`.`test_many_conc_2` SELECT number, number from numbers(10000) on instance (cluster.py:3455, query) 2025-06-08 17:17:21 [ 509 ] DEBUG : Executing query INSERT INTO `postgres_database`.`test_many_conc_3` SELECT number, number from numbers(10000) on instance (cluster.py:3455, query) 2025-06-08 17:17:21 [ 509 ] DEBUG : Executing query INSERT INTO `postgres_database`.`test_many_conc_4` SELECT number, number from numbers(10000) on instance (cluster.py:3455, query) 2025-06-08 
17:17:21 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:17:22 [ 509 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.2.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) 2025-06-08 17:17:22 [ 509 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query) 2025-06-08 17:17:23 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_0 SELECT 0*10000*(10000 + number), number from numbers(1000) on instance (cluster.py:3455, query) 2025-06-08 17:17:24 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_1 SELECT 1*10000*(10000 + number), number from numbers(1000) on instance (cluster.py:3455, query) 2025-06-08 17:17:24 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_1 SELECT 1*10000*(10001 + number), number from numbers(1000) on instance (cluster.py:3455, query) 2025-06-08 17:17:25 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_2 SELECT 2*10000*(10000 + number), number from numbers(1000) on instance (cluster.py:3455, query) 2025-06-08 17:17:25 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_2 SELECT 2*10000*(10001 + number), number from numbers(1000) on instance (cluster.py:3455, query) 2025-06-08 17:17:26 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_3 SELECT 3*10000*(10000 + number), number from numbers(1000) on instance (cluster.py:3455, query) 2025-06-08 17:17:26 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_3 SELECT 3*10000*(10001 + number), number from numbers(1000) on instance (cluster.py:3455, query) 2025-06-08 17:17:27 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_4 SELECT 4*10000*(10000 + number), number from numbers(1000) on instance (cluster.py:3455, query) 2025-06-08 17:17:27 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_4 SELECT 4*10000*(10001 + number), number from numbers(1000) on instance (cluster.py:3455, query) 2025-06-08 17:17:33 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_0 SELECT 50001 + number, number from numbers(5000) on instance (cluster.py:3455, query) 2025-06-08 17:17:33 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_1 SELECT 50002 + number, number from numbers(5000) on instance (cluster.py:3455, query) 2025-06-08 17:17:34 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_2 SELECT 50003 + number, number from numbers(5000) on instance (cluster.py:3455, query) 2025-06-08 17:17:34 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_3 SELECT 50004 + number, number from numbers(5000) on instance (cluster.py:3455, query) 2025-06-08 17:17:34 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_4 SELECT 50005 + number, number from numbers(5000) on instance (cluster.py:3455, query) 2025-06-08 17:17:34 [ 509 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:17:35 [ 509 ] DEBUG : Executing query select * from `postgres_database`.`test_many_conc_0` order by key; on instance (cluster.py:3455, query) 2025-06-08 17:17:35 [ 509 ] DEBUG : Executing query select * from `test_database.test_many_conc_0` order by key; on instance (cluster.py:3455, query) 
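For reference, the pair of SELECTs just above is the synchronization check from helpers/postgres_utility.py:392: it reads the source table and its materialized copy ordered by the key column and compares the results. A rough sketch inferred from the queries in this log (not the actual helper; note it quotes database and table separately, unlike the failing query):

# Sketch of the synchronization check as suggested by the log; the real
# implementation lives in helpers/postgres_utility.py and may differ.
def check_tables_are_synchronized_sketch(
    instance,
    table_name,
    order_by="key",
    postgres_database="postgres_database",
    materialized_database="test_database",
):
    # Expected data comes from the PostgreSQL-engine database...
    expected = instance.query(
        f"select * from `{postgres_database}`.`{table_name}` order by {order_by};"
    )
    # ...and must match what the MaterializedPostgreSQL database replicated.
    result = instance.query(
        f"select * from `{materialized_database}`.`{table_name}` order by {order_by};"
    )
    assert result == expected, f"{table_name} is not synchronized"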
---------------------------- Captured log teardown -----------------------------
2025-06-08 17:17:35 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3455, query)
2025-06-08 17:17:35 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:17:36 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:17:36 [ 509 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.2.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
___________________________ test_multiple_databases ____________________________
[gw1] linux -- Python 3.10.12 /usr/bin/python3

started_cluster =

    def test_multiple_databases(started_cluster):
        NUM_TABLES = 5
        conn = get_postgres_conn(
            ip=started_cluster.postgres_ip,
            port=started_cluster.postgres_port,
            database=False,
        )
        pg_manager.create_postgres_db("postgres_database_1")
        pg_manager.create_postgres_db("postgres_database_2")

        conn1 = get_postgres_conn(
            ip=started_cluster.postgres_ip,
            port=started_cluster.postgres_port,
            database=True,
            database_name="postgres_database_1",
        )
        conn2 = get_postgres_conn(
            ip=started_cluster.postgres_ip,
            port=started_cluster.postgres_port,
            database=True,
            database_name="postgres_database_2",
        )
        cursor1 = conn1.cursor()
        cursor2 = conn2.cursor()

        pg_manager.create_clickhouse_postgres_db(
            "postgres_database_1",
            "",
            "postgres_database_1",
        )
        pg_manager.create_clickhouse_postgres_db(
            "postgres_database_2",
            "",
            "postgres_database_2",
        )
        cursors = [cursor1, cursor2]
        for cursor_id in range(len(cursors)):
            for i in range(NUM_TABLES):
                table_name = "postgresql_replica_{}".format(i)
                create_postgres_table(cursors[cursor_id], table_name)
                instance.query(
                    "INSERT INTO postgres_database_{}.{} SELECT number, number from numbers(50)".format(
                        cursor_id + 1, table_name
                    )
                )
        print(
            "database 1 tables: ",
            instance.query(
                """SELECT name FROM system.tables WHERE database = 'postgres_database_1';"""
            ),
        )
        print(
            "database 2 tables: ",
            instance.query(
                """SELECT name FROM system.tables WHERE database = 'postgres_database_2';"""
            ),
        )

        pg_manager.create_materialized_db(
            started_cluster.postgres_ip,
            started_cluster.postgres_port,
            "test_database_1",
            "postgres_database_1",
        )
        pg_manager.create_materialized_db(
            started_cluster.postgres_ip,
            started_cluster.postgres_port,
            "test_database_2",
            "postgres_database_2",
        )

        cursors = [cursor1, cursor2]
        for cursor_id in range(len(cursors)):
            for i in range(NUM_TABLES):
                table_name = "postgresql_replica_{}".format(i)
                instance.query(
                    "INSERT INTO postgres_database_{}.{} SELECT 50 + number, number from numbers(50)".format(
                        cursor_id + 1, table_name
                    )
                )

        for cursor_id in range(len(cursors)):
            for i in range(NUM_TABLES):
                table_name = "postgresql_replica_{}".format(i)
>               check_tables_are_synchronized(
                    instance,
                    table_name,
                    "key",
                    "postgres_database_{}".format(cursor_id + 1),
                    "test_database_{}".format(cursor_id + 1),
                )

test_postgresql_replica_database_engine_1/test.py:648:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:392: in check_tables_are_synchronized
    result = instance.query(result_query)
helpers/cluster.py:3456: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = def get_answer(self): self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT) self.stdout_file.seek(0) self.stderr_file.seek(0) stdout = self.stdout_file.read().decode("utf-8", errors="replace") stderr = self.stderr_file.read().decode("utf-8", errors="replace") if ( self.timer is not None and not self.process_finished_before_timeout and not self.ignore_error ): logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}") raise QueryTimeoutExceedException("Client timed out!") if ( self.process.returncode != 0 or self.remove_trash_from_stderr(stderr) ) and not self.ignore_error: > raise QueryRuntimeException( "Client failed! Return code: {}, stderr: {}".format( self.process.returncode, stderr ), self.process.returncode, stderr, ) E helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18): E Code: 60. DB::Exception: Received from 172.16.2.3:9000. DB::Exception: Unknown table expression identifier 'test_database_1.postgresql_replica_0' in scope SELECT * FROM `test_database_1.postgresql_replica_0` ORDER BY key ASC. Stack trace: E E 0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x000000003a554b73 E 1. ./build_docker/./src/Common/Exception.cpp:96: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001cb49070 E 2. DB::Exception::Exception(int, FormatStringHelperImpl::type, std::type_identity::type>, String const&, String&&) @ 0x000000000c5c35b6 E 3. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:6971: DB::(anonymous namespace)::QueryAnalyzer::resolveQuery(std::shared_ptr const&, DB::(anonymous namespace)::IdentifierResolveScope&) @ 0x000000002dbba909 E 4. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::QueryAnalysisPass::run(std::shared_ptr&, std::shared_ptr) @ 0x000000002dbabc57 E 5. ./build_docker/./src/Analyzer/QueryTreePassManager.cpp:0: DB::QueryTreePassManager::run(std::shared_ptr) @ 0x000000002dba2d0e E 6. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:0: DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::shared_ptr const&, DB::SelectQueryOptions const&, std::shared_ptr const&, std::shared_ptr const&) @ 0x000000002e5ea236 E 7. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:160: DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::shared_ptr const&, std::shared_ptr const&, DB::SelectQueryOptions const&, std::vector> const&) @ 0x000000002e5e2d54 E 8. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:0: std::__unique_if::__unique_single std::make_unique[abi:v15000]&, std::shared_ptr const&, DB::SelectQueryOptions const&>(std::shared_ptr&, std::shared_ptr const&, DB::SelectQueryOptions const&) @ 0x000000002e5ede81 E 9. ./contrib/llvm-project/libcxx/include/__functional/function.h:0: ? @ 0x000000002e49edbd E 10. ./build_docker/./src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x000000002f22694a E 11. ./build_docker/./src/Interpreters/executeQuery.cpp:1376: DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x000000002f21d9ca E 12. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::TCPHandler::runImpl() @ 0x000000003272de3d E 13. ./build_docker/./src/Server/TCPHandler.cpp:2341: DB::TCPHandler::run() @ 0x0000000032772891 E 14. 
./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x000000003a25ceaf E 15. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x000000003a25db57 E 16. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:0: Poco::PooledThread::run() @ 0x000000003a66ab3c E 17. ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000003a663b28 E 18. asan_thread_start(void*) @ 0x000000000a7b9edb E 19. ? @ 0x00007ff7a113bac3 E 20. ? @ 0x00007ff7a11cd850 E . (UNKNOWN_TABLE) E (query: select * from `test_database_1.postgresql_replica_0` order by key;) helpers/client.py:239: QueryRuntimeException ---------------------------- Captured stdout setup ----------------------------- PostgreSQL is available - running test ----------------------------- Captured stdout call ----------------------------- Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_0" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_1" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_2" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_3" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_4" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_0" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_1" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_2" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_3" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_4" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) database 1 tables: postgresql_replica_0 postgresql_replica_1 postgresql_replica_2 postgresql_replica_3 postgresql_replica_4 database 2 tables: postgresql_replica_0 postgresql_replica_1 postgresql_replica_2 postgresql_replica_3 postgresql_replica_4 Checking table postgresql_replica_0 exists in test_database_1 Checking table is synchronized: test_database_1.postgresql_replica_0 ------------------------------ Captured log call ------------------------------- 2025-06-08 17:17:36 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database_1" on instance (cluster.py:3455, query) 2025-06-08 17:17:36 [ 509 ] DEBUG : Executing query CREATE DATABASE "postgres_database_1" ENGINE = PostgreSQL('172.16.2.2:5432', 'postgres_database_1', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) 2025-06-08 17:17:36 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database_2" on instance (cluster.py:3455, query) 2025-06-08 17:17:36 [ 509 ] DEBUG : Executing query CREATE DATABASE "postgres_database_2" ENGINE = PostgreSQL('172.16.2.2:5432', 'postgres_database_2', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) 2025-06-08 17:17:37 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database_1.postgresql_replica_0 SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:17:37 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database_1.postgresql_replica_1 SELECT number, 
number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:17:37 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database_1.postgresql_replica_2 SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:17:37 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database_1.postgresql_replica_3 SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:17:37 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database_1.postgresql_replica_4 SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:17:38 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database_2.postgresql_replica_0 SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:17:38 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database_2.postgresql_replica_1 SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:17:38 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database_2.postgresql_replica_2 SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:17:38 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database_2.postgresql_replica_3 SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:17:38 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database_2.postgresql_replica_4 SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:17:38 [ 509 ] DEBUG : Executing query SELECT name FROM system.tables WHERE database = 'postgres_database_1'; on instance (cluster.py:3455, query) 2025-06-08 17:17:39 [ 509 ] DEBUG : Executing query SELECT name FROM system.tables WHERE database = 'postgres_database_2'; on instance (cluster.py:3455, query) 2025-06-08 17:17:39 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database_1` on instance (cluster.py:3455, query) 2025-06-08 17:17:39 [ 509 ] DEBUG : Executing query CREATE DATABASE `test_database_1` ENGINE = MaterializedPostgreSQL('172.16.2.2:5432', 'postgres_database_1', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) 2025-06-08 17:17:39 [ 509 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query) 2025-06-08 17:17:39 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database_2` on instance (cluster.py:3455, query) 2025-06-08 17:17:40 [ 509 ] DEBUG : Executing query CREATE DATABASE `test_database_2` ENGINE = MaterializedPostgreSQL('172.16.2.2:5432', 'postgres_database_2', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) 2025-06-08 17:17:40 [ 509 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query) 2025-06-08 17:17:40 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database_1.postgresql_replica_0 SELECT 50 + number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:17:40 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database_1.postgresql_replica_1 SELECT 50 + number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:17:40 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database_1.postgresql_replica_2 SELECT 50 + number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:17:41 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database_1.postgresql_replica_3 SELECT 50 + number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:17:41 [ 509 ] DEBUG : Executing query 
INSERT INTO postgres_database_1.postgresql_replica_4 SELECT 50 + number, number from numbers(50) on instance (cluster.py:3455, query)
2025-06-08 17:17:41 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database_2.postgresql_replica_0 SELECT 50 + number, number from numbers(50) on instance (cluster.py:3455, query)
2025-06-08 17:17:41 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database_2.postgresql_replica_1 SELECT 50 + number, number from numbers(50) on instance (cluster.py:3455, query)
2025-06-08 17:17:41 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database_2.postgresql_replica_2 SELECT 50 + number, number from numbers(50) on instance (cluster.py:3455, query)
2025-06-08 17:17:42 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database_2.postgresql_replica_3 SELECT 50 + number, number from numbers(50) on instance (cluster.py:3455, query)
2025-06-08 17:17:42 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database_2.postgresql_replica_4 SELECT 50 + number, number from numbers(50) on instance (cluster.py:3455, query)
2025-06-08 17:17:42 [ 509 ] DEBUG : Executing query SHOW TABLES FROM `test_database_1` on instance (cluster.py:3455, query)
2025-06-08 17:17:42 [ 509 ] DEBUG : Executing query select * from `postgres_database_1`.`postgresql_replica_0` order by key; on instance (cluster.py:3455, query)
2025-06-08 17:17:42 [ 509 ] DEBUG : Executing query select * from `test_database_1.postgresql_replica_0` order by key; on instance (cluster.py:3455, query)
---------------------------- Captured log teardown -----------------------------
2025-06-08 17:17:43 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database_2` SYNC on instance (cluster.py:3455, query)
2025-06-08 17:17:43 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database_1` SYNC on instance (cluster.py:3455, query)
2025-06-08 17:17:43 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database_1" on instance (cluster.py:3455, query)
2025-06-08 17:17:43 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:17:43 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database_2" on instance (cluster.py:3455, query)
2025-06-08 17:17:44 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:17:44 [ 509 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.2.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
________________________________ test_quoting_1 ________________________________
[gw1] linux -- Python 3.10.12 /usr/bin/python3

started_cluster =

    def test_quoting_1(started_cluster):
        table_name = "user"
        pg_manager.create_and_fill_postgres_table(table_name)
        pg_manager.create_materialized_db(
            ip=started_cluster.postgres_ip, port=started_cluster.postgres_port
        )
>       check_tables_are_synchronized(instance, table_name)

test_postgresql_replica_database_engine_1/test.py:829:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:392: in check_tables_are_synchronized
    result = instance.query(result_query)
helpers/cluster.py:3456: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def get_answer(self):
self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT) self.stdout_file.seek(0) self.stderr_file.seek(0) stdout = self.stdout_file.read().decode("utf-8", errors="replace") stderr = self.stderr_file.read().decode("utf-8", errors="replace") if ( self.timer is not None and not self.process_finished_before_timeout and not self.ignore_error ): logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}") raise QueryTimeoutExceedException("Client timed out!") if ( self.process.returncode != 0 or self.remove_trash_from_stderr(stderr) ) and not self.ignore_error: > raise QueryRuntimeException( "Client failed! Return code: {}, stderr: {}".format( self.process.returncode, stderr ), self.process.returncode, stderr, ) E helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18): E Code: 60. DB::Exception: Received from 172.16.2.3:9000. DB::Exception: Unknown table expression identifier 'test_database.user' in scope SELECT * FROM `test_database.user` ORDER BY key ASC. Stack trace: E E 0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x000000003a554b73 E 1. ./build_docker/./src/Common/Exception.cpp:96: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001cb49070 E 2. DB::Exception::Exception(int, FormatStringHelperImpl::type, std::type_identity::type>, String const&, String&&) @ 0x000000000c5c35b6 E 3. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:6971: DB::(anonymous namespace)::QueryAnalyzer::resolveQuery(std::shared_ptr const&, DB::(anonymous namespace)::IdentifierResolveScope&) @ 0x000000002dbba909 E 4. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::QueryAnalysisPass::run(std::shared_ptr&, std::shared_ptr) @ 0x000000002dbabc57 E 5. ./build_docker/./src/Analyzer/QueryTreePassManager.cpp:0: DB::QueryTreePassManager::run(std::shared_ptr) @ 0x000000002dba2d0e E 6. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:0: DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::shared_ptr const&, DB::SelectQueryOptions const&, std::shared_ptr const&, std::shared_ptr const&) @ 0x000000002e5ea236 E 7. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:160: DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::shared_ptr const&, std::shared_ptr const&, DB::SelectQueryOptions const&, std::vector> const&) @ 0x000000002e5e2d54 E 8. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:0: std::__unique_if::__unique_single std::make_unique[abi:v15000]&, std::shared_ptr const&, DB::SelectQueryOptions const&>(std::shared_ptr&, std::shared_ptr const&, DB::SelectQueryOptions const&) @ 0x000000002e5ede81 E 9. ./contrib/llvm-project/libcxx/include/__functional/function.h:0: ? @ 0x000000002e49edbd E 10. ./build_docker/./src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x000000002f22694a E 11. ./build_docker/./src/Interpreters/executeQuery.cpp:1376: DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x000000002f21d9ca E 12. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::TCPHandler::runImpl() @ 0x000000003272de3d E 13. ./build_docker/./src/Server/TCPHandler.cpp:2341: DB::TCPHandler::run() @ 0x0000000032772891 E 14. 
./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x000000003a25ceaf E 15. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x000000003a25db57 E 16. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:0: Poco::PooledThread::run() @ 0x000000003a66ab3c E 17. ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000003a663b28 E 18. asan_thread_start(void*) @ 0x000000000a7b9edb E 19. ? @ 0x00007ff7a113bac3 E 20. ? @ 0x00007ff7a11cd850 E . (UNKNOWN_TABLE) E (query: select * from `test_database.user` order by key;) helpers/client.py:239: QueryRuntimeException ---------------------------- Captured stdout setup ----------------------------- PostgreSQL is available - running test ----------------------------- Captured stdout call ----------------------------- Query: CREATE TABLE IF NOT EXISTS "user" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Checking table user exists in test_database Checking table is synchronized: test_database.user ------------------------------ Captured log call ------------------------------- 2025-06-08 17:17:44 [ 509 ] DEBUG : Executing query INSERT INTO `postgres_database`.`user` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:17:44 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:17:45 [ 509 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.2.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) 2025-06-08 17:17:45 [ 509 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query) 2025-06-08 17:17:45 [ 509 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:17:45 [ 509 ] DEBUG : Executing query select * from `postgres_database`.`user` order by key; on instance (cluster.py:3455, query) 2025-06-08 17:17:45 [ 509 ] DEBUG : Executing query select * from `test_database.user` order by key; on instance (cluster.py:3455, query) ---------------------------- Captured log teardown ----------------------------- 2025-06-08 17:17:45 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3455, query) 2025-06-08 17:17:46 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query) 2025-06-08 17:17:46 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query) 2025-06-08 17:17:46 [ 509 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.2.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) ________________________________ test_quoting_2 ________________________________ [gw1] linux -- Python 3.10.12 /usr/bin/python3 started_cluster = def test_quoting_2(started_cluster): table_name = "user" pg_manager.create_and_fill_postgres_table(table_name) pg_manager.create_materialized_db( ip=started_cluster.postgres_ip, port=started_cluster.postgres_port, settings=[f"materialized_postgresql_tables_list = '{table_name}'"], ) > check_tables_are_synchronized(instance, table_name) test_postgresql_replica_database_engine_1/test.py:840: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
________________________________ test_quoting_2 ________________________________
[gw1] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 

    def test_quoting_2(started_cluster):
        table_name = "user"
        pg_manager.create_and_fill_postgres_table(table_name)
        pg_manager.create_materialized_db(
            ip=started_cluster.postgres_ip,
            port=started_cluster.postgres_port,
            settings=[f"materialized_postgresql_tables_list = '{table_name}'"],
        )
>       check_tables_are_synchronized(instance, table_name)

test_postgresql_replica_database_engine_1/test.py:840:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:392: in check_tables_are_synchronized
    result = instance.query(result_query)
helpers/cluster.py:3456: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 

[get_answer source and server stack trace as in the first failure above]
E       helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18):
E       Code: 60. DB::Exception: Received from 172.16.2.3:9000. DB::Exception: Unknown table expression identifier 'test_database.user' in scope SELECT * FROM `test_database.user` ORDER BY key ASC. (UNKNOWN_TABLE)
E       (query: select * from `test_database.user` order by key;)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
----------------------------- Captured stdout call -----------------------------
Query: CREATE TABLE IF NOT EXISTS "user" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Checking table user exists in test_database
Checking table is synchronized: test_database.user
------------------------------ Captured log call -------------------------------
2025-06-08 17:17:46 [ 509 ] DEBUG : Executing query INSERT INTO `postgres_database`.`user` SELECT number, number from numbers(50) on instance (cluster.py:3455, query)
2025-06-08 17:17:46 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:17:47 [ 509 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.2.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') SETTINGS materialized_postgresql_tables_list = 'user' on instance (cluster.py:3455, query)
2025-06-08 17:17:47 [ 509 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query)
2025-06-08 17:17:47 [ 509 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:17:48 [ 509 ] DEBUG : Executing query select * from `postgres_database`.`user` order by key; on instance (cluster.py:3455, query)
2025-06-08 17:17:48 [ 509 ] DEBUG : Executing query select * from `test_database.user` order by key; on instance (cluster.py:3455, query)
---------------------------- Captured log teardown -----------------------------
2025-06-08 17:17:48 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3455, query)
2025-06-08 17:17:49 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:17:49 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:17:49 [ 509 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.2.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
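test_quoting_2 fails identically; the only difference from the previous test is that replication is restricted to the listed table through materialized_postgresql_tables_list, as the captured CREATE DATABASE above shows. For reference, a sketch of how such a settings list can be appended to that DDL; the real create_materialized_db lives in helpers/postgres_utility.py and may assemble it differently:

    # Illustrative only: mirrors the DDL captured in the log above.
    def make_create_db_ddl(ip: str, port: int, settings: list) -> str:
        ddl = (
            f"CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL("
            f"'{ip}:{port}', 'postgres_database', 'postgres', 'mysecretpassword')"
        )
        if settings:
            ddl += " SETTINGS " + ", ".join(settings)
        return ddl

    print(make_create_db_ddl("172.16.2.2", 5432,
                             ["materialized_postgresql_tables_list = 'user'"]))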
_________________________ test_replica_identity_index __________________________
[gw1] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 

    def test_replica_identity_index(started_cluster):
        pg_manager.create_postgres_table(
            "postgresql_replica", template=postgres_table_template_3
        )
        pg_manager.execute("CREATE unique INDEX idx on postgresql_replica(key1, key2);")
        pg_manager.execute(
            "ALTER TABLE postgresql_replica REPLICA IDENTITY USING INDEX idx"
        )
        instance.query(
            "INSERT INTO postgres_database.postgresql_replica SELECT number, number, number, number from numbers(50, 10)"
        )
        pg_manager.create_materialized_db(
            ip=started_cluster.postgres_ip, port=started_cluster.postgres_port
        )
        instance.query(
            "INSERT INTO postgres_database.postgresql_replica SELECT number, number, number, number from numbers(100, 10)"
        )
>       check_tables_are_synchronized(instance, "postgresql_replica", order_by="key1")

test_postgresql_replica_database_engine_1/test.py:334:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:392: in check_tables_are_synchronized
    result = instance.query(result_query)
helpers/cluster.py:3456: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 

[get_answer source and server stack trace as in the first failure above]
E       helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18):
E       Code: 60. DB::Exception: Received from 172.16.2.3:9000. DB::Exception: Unknown table expression identifier 'test_database.postgresql_replica' in scope SELECT * FROM `test_database.postgresql_replica` ORDER BY key1 ASC. (UNKNOWN_TABLE)
E       (query: select * from `test_database.postgresql_replica` order by key1;)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
----------------------------- Captured stdout call -----------------------------
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica" ( key1 Integer NOT NULL, value1 Integer, key2 Integer NOT NULL, value2 Integer NOT NULL)
Checking table postgresql_replica exists in test_database
Checking table is synchronized: test_database.postgresql_replica
------------------------------ Captured log call -------------------------------
2025-06-08 17:17:49 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica SELECT number, number, number, number from numbers(50, 10) on instance (cluster.py:3455, query)
2025-06-08 17:17:49 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:17:50 [ 509 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.2.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
2025-06-08 17:17:50 [ 509 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query)
2025-06-08 17:17:50 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica SELECT number, number, number, number from numbers(100, 10) on instance (cluster.py:3455, query)
2025-06-08 17:17:50 [ 509 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:17:50 [ 509 ] DEBUG : Executing query select * from `postgres_database`.`postgresql_replica` order by key1; on instance (cluster.py:3455, query)
2025-06-08 17:17:51 [ 509 ] DEBUG : Executing query select * from `test_database.postgresql_replica` order by key1; on instance (cluster.py:3455, query)
---------------------------- Captured log teardown -----------------------------
2025-06-08 17:17:51 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3455, query)
2025-06-08 17:17:51 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:17:51 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:17:51 [ 509 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.2.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
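test_replica_identity_index configures the PostgreSQL table to identify UPDATE/DELETE rows through a unique index rather than a primary key (REPLICA IDENTITY USING INDEX). A standalone sketch of that PostgreSQL-side setup, using psycopg2 directly instead of the test's pg_manager helper; the connection parameters mirror the captured log, and psycopg2 is an assumption of this sketch:

    import psycopg2

    # Connection parameters as seen in the log above; the test itself goes
    # through its own PostgresManager helper rather than raw psycopg2.
    conn = psycopg2.connect(
        host="172.16.2.2", port=5432, user="postgres",
        password="mysecretpassword", dbname="postgres_database",
    )
    conn.autocommit = True
    cur = conn.cursor()
    # A unique index over (key1, key2) serves as the replica identity when
    # the table has no primary key, which is what this test exercises.
    cur.execute("CREATE UNIQUE INDEX idx ON postgresql_replica(key1, key2);")
    cur.execute("ALTER TABLE postgresql_replica REPLICA IDENTITY USING INDEX idx;")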
_____________________________ test_replicating_dml _____________________________
[gw1] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 

    def test_replicating_dml(started_cluster):
        NUM_TABLES = 5
        for i in range(NUM_TABLES):
            pg_manager.create_postgres_table(f"postgresql_replica_{i}")
            instance.query(
                "INSERT INTO postgres_database.postgresql_replica_{} SELECT number, {} from numbers(50)".format(
                    i, i
                )
            )
        pg_manager.create_materialized_db(
            ip=started_cluster.postgres_ip, port=started_cluster.postgres_port
        )
        for i in range(NUM_TABLES):
            instance.query(
                f"INSERT INTO postgres_database.postgresql_replica_{i} SELECT 50 + number, {i} from numbers(1000)"
            )
>       check_several_tables_are_synchronized(instance, NUM_TABLES)

test_postgresql_replica_database_engine_1/test.py:100:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:419: in check_several_tables_are_synchronized
    check_tables_are_synchronized(
helpers/postgres_utility.py:392: in check_tables_are_synchronized
    result = instance.query(result_query)
helpers/cluster.py:3456: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 

[get_answer source and server stack trace as in the first failure above]
E       helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18):
E       Code: 60. DB::Exception: Received from 172.16.2.3:9000. DB::Exception: Unknown table expression identifier 'test_database.postgresql_replica_0' in scope SELECT * FROM `test_database.postgresql_replica_0` ORDER BY key ASC. (UNKNOWN_TABLE)
E       (query: select * from `test_database.postgresql_replica_0` order by key;)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
----------------------------- Captured stdout call -----------------------------
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_0" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_1" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_2" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_3" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_4" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Checking table postgresql_replica_0 exists in test_database
Checking table is synchronized: test_database.postgresql_replica_0
------------------------------ Captured log call -------------------------------
2025-06-08 17:17:52 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_0 SELECT number, 0 from numbers(50) on instance (cluster.py:3455, query)
2025-06-08 17:17:52 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_1 SELECT number, 1 from numbers(50) on instance (cluster.py:3455, query)
2025-06-08 17:17:52 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_2 SELECT number, 2 from numbers(50) on instance (cluster.py:3455, query)
2025-06-08 17:17:52 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_3 SELECT number, 3 from numbers(50) on instance (cluster.py:3455, query)
2025-06-08 17:17:53 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_4 SELECT number, 4 from numbers(50) on instance (cluster.py:3455, query)
2025-06-08 17:17:53 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:17:53 [ 509 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.2.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
2025-06-08 17:17:53 [ 509 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query)
2025-06-08 17:17:53 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_0 SELECT 50 + number, 0 from numbers(1000) on instance (cluster.py:3455, query)
2025-06-08 17:17:54 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_1 SELECT 50 + number, 1 from numbers(1000) on instance (cluster.py:3455, query)
2025-06-08 17:17:54 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_2 SELECT 50 + number, 2 from numbers(1000) on instance (cluster.py:3455, query)
2025-06-08 17:17:54 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_3 SELECT 50 + number, 3 from numbers(1000) on instance (cluster.py:3455, query)
2025-06-08 17:17:54 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_4 SELECT 50 + number, 4 from numbers(1000) on instance (cluster.py:3455, query)
2025-06-08 17:17:55 [ 509 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:17:55 [ 509 ] DEBUG : Executing query select * from `postgres_database`.`postgresql_replica_0` order by key; on instance (cluster.py:3455, query)
2025-06-08 17:17:55 [ 509 ] DEBUG : Executing query select * from `test_database.postgresql_replica_0` order by key; on instance (cluster.py:3455, query)
---------------------------- Captured log teardown -----------------------------
2025-06-08 17:17:55 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3455, query)
2025-06-08 17:17:55 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:17:56 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:17:56 [ 509 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.2.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
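The tracebacks of the multi-table tests show check_several_tables_are_synchronized (helpers/postgres_utility.py:419) fanning out to the per-table check at line 392. A rough reconstruction of that loop, inferred from the traceback and the postgresql_replica_<i> naming used throughout this log; the actual helper body may differ:

    def check_several_tables_are_synchronized(instance, num_tables):
        # Inferred shape: run the same per-table comparison for each of the
        # NUM_TABLES tables created by the test.
        for i in range(num_tables):
            check_tables_are_synchronized(instance, f"postgresql_replica_{i}")

This also explains why every multi-table failure above stops at postgresql_replica_0: the first comparison raises before the remaining tables are checked.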
__________ test_restart_server_while_replication_startup_not_finished __________
[gw1] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 

    def test_restart_server_while_replication_startup_not_finished(started_cluster):
        NUM_TABLES = 5
        pg_manager.create_and_fill_postgres_tables(NUM_TABLES, 100000)
        pg_manager.create_materialized_db(
            ip=started_cluster.postgres_ip, port=started_cluster.postgres_port
        )
        time.sleep(1)
        instance.restart_clickhouse()
>       check_several_tables_are_synchronized(instance, NUM_TABLES)

test_postgresql_replica_database_engine_1/test.py:774:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:419: in check_several_tables_are_synchronized
    check_tables_are_synchronized(
helpers/postgres_utility.py:392: in check_tables_are_synchronized
    result = instance.query(result_query)
helpers/cluster.py:3456: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 

[get_answer source and server stack trace as in the first failure above]
E       helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18):
E       Code: 60. DB::Exception: Received from 172.16.2.3:9000. DB::Exception: Unknown table expression identifier 'test_database.postgresql_replica_0' in scope SELECT * FROM `test_database.postgresql_replica_0` ORDER BY key ASC. (UNKNOWN_TABLE)
E       (query: select * from `test_database.postgresql_replica_0` order by key;)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
----------------------------- Captured stdout call -----------------------------
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_0" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_1" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_2" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_3" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_4" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Checking table postgresql_replica_0 exists in test_database
Checking table is synchronized: test_database.postgresql_replica_0
------------------------------ Captured log call -------------------------------
2025-06-08 17:17:56 [ 509 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_0` SELECT number, number from numbers(100000) on instance (cluster.py:3455, query)
2025-06-08 17:17:57 [ 509 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_1` SELECT number, number from numbers(100000) on instance (cluster.py:3455, query)
2025-06-08 17:17:57 [ 509 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_2` SELECT number, number from numbers(100000) on instance (cluster.py:3455, query)
2025-06-08 17:17:58 [ 509 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_3` SELECT number, number from numbers(100000) on instance (cluster.py:3455, query)
2025-06-08 17:17:58 [ 509 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_4` SELECT number, number from numbers(100000) on instance (cluster.py:3455, query)
2025-06-08 17:17:59 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:17:59 [ 509 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.2.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
2025-06-08 17:17:59 [ 509 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query)
2025-06-08 17:18:00 [ 509 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine1_instance_1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse'] (cluster.py:2046, exec_in_container)
2025-06-08 17:18:00 [ 509 ] DEBUG : Command:['docker', 'exec', '-u', 'root', 'roottestpostgresqlreplicadatabaseengine1_instance_1', 'bash', '-c', 'ps -C clickhouse'] (cluster.py:113, run_and_check)
2025-06-08 17:18:01 [ 509 ] DEBUG : Stdout: PID TTY TIME CMD (cluster.py:121, run_and_check)
2025-06-08 17:18:01 [ 509 ] DEBUG : Stdout: 777 ? 00:00:29 clickhouse (cluster.py:121, run_and_check)
2025-06-08 17:18:01 [ 509 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine1_instance_1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse'] (cluster.py:2046, exec_in_container)
2025-06-08 17:18:01 [ 509 ] DEBUG : Command:['docker', 'exec', '-u', 'root', 'roottestpostgresqlreplicadatabaseengine1_instance_1', 'bash', '-c', 'pkill clickhouse'] (cluster.py:113, run_and_check)
2025-06-08 17:18:01 [ 509 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine1_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2046, exec_in_container)
2025-06-08 17:18:01 [ 509 ] DEBUG : Stdout:777 (cluster.py:121, run_and_check)
2025-06-08 17:18:02 [ 509 ] DEBUG : Stdout:777 (cluster.py:121, run_and_check)
[the ps-ax poll above was repeated at 17:18:02 and twice at 17:18:03, each run mirrored by its docker exec Command line, until no clickhouse process remained]
2025-06-08 17:18:03 [ 509 ] DEBUG : No clickhouse process running. Start new one. (cluster.py:3817, start_clickhouse)
2025-06-08 17:18:03 [ 509 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine1_instance_1 detach:False nothrow:False cmd: ['bash', '-c', 'clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon'] (cluster.py:2046, exec_in_container)
2025-06-08 17:18:03 [ 509 ] DEBUG : Command:['docker', 'exec', '-u', '0', 'roottestpostgresqlreplicadatabaseengine1_instance_1', 'bash', '-c', 'clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon'] (cluster.py:113, run_and_check)
2025-06-08 17:18:04 [ 509 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine1_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2046, exec_in_container)
2025-06-08 17:18:04 [ 509 ] DEBUG : Stdout:1548 (cluster.py:121, run_and_check)
2025-06-08 17:18:04 [ 509 ] DEBUG : Clickhouse process running. (cluster.py:3828, start_clickhouse)
2025-06-08 17:18:04 [ 509 ] DEBUG : Stdout:1548 (cluster.py:121, run_and_check)
2025-06-08 17:18:04 [ 509 ] DEBUG : Executing query select 20 on instance (cluster.py:3455, query)
2025-06-08 17:18:05 [ 509 ] DEBUG : Executing query select 20 on instance (cluster.py:3455, query)
2025-06-08 17:18:06 [ 509 ] DEBUG : Executing query select 20 on instance (cluster.py:3455, query)
2025-06-08 17:18:06 [ 509 ] DEBUG : Executing query select 20 on instance (cluster.py:3455, query)
2025-06-08 17:18:06 [ 509 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:18:07 [ 509 ] DEBUG : Executing query select * from `postgres_database`.`postgresql_replica_0` order by key; on instance (cluster.py:3455, query)
2025-06-08 17:18:07 [ 509 ] DEBUG : Executing query select * from `test_database.postgresql_replica_0` order by key; on instance (cluster.py:3455, query)
---------------------------- Captured log teardown -----------------------------
2025-06-08 17:18:07 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3455, query)
2025-06-08 17:18:07 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:18:08 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:18:08 [ 509 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.2.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
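The restart sequence in the test above is fully visible in its log: pkill, poll ps until the old PID (777) disappears, start the daemon, then probe with select 20 until the server answers. A condensed sketch of that sequence; the helper names and docker plumbing are assumptions of this sketch, while the shell commands are the ones captured in the log:

    import subprocess
    import time

    def sh(container: str, cmd: str) -> str:
        # Thin wrapper over `docker exec`, as the cluster helper does above.
        out = subprocess.run(
            ["docker", "exec", container, "bash", "-c", cmd],
            capture_output=True, text=True,
        )
        return out.stdout.strip()

    def restart_clickhouse(container: str) -> None:
        sh(container, "pkill clickhouse")
        # Poll until no clickhouse process remains (PID 777 in the log above).
        while sh(container, "ps ax | grep 'clickhouse' | grep -v 'grep' | awk '{print $1}'"):
            time.sleep(1)
        sh(container,
           "clickhouse server --config-file=/etc/clickhouse-server/config.xml "
           "--log-file=/var/log/clickhouse-server/clickhouse-server.log "
           "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon")
        # The real helper then probes the server with `select 20` until it
        # answers, as the repeated entries in the log show; a production
        # version would bound both loops with a timeout.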
___________________________ test_single_transaction ____________________________
[gw1] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 

    def test_single_transaction(started_cluster):
        conn = get_postgres_conn(
            ip=started_cluster.postgres_ip,
            port=started_cluster.postgres_port,
            database=True,
            auto_commit=False,
        )
        cursor = conn.cursor()
        table_name = "postgresql_replica_0"
        create_postgres_table(cursor, table_name)
        conn.commit()
        pg_manager.create_materialized_db(
            ip=started_cluster.postgres_ip, port=started_cluster.postgres_port
        )
        assert_nested_table_is_created(instance, table_name)
        for query in queries:
            print("query {}".format(query))
            cursor.execute(query.format(0))
        time.sleep(5)
        result = instance.query(f"select count() from test_database.{table_name}")
        # no commit yet
        assert int(result) == 0
        conn.commit()
>       check_tables_are_synchronized(instance, table_name)

test_postgresql_replica_database_engine_1/test.py:531:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:392: in check_tables_are_synchronized
    result = instance.query(result_query)
helpers/cluster.py:3456: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 

[get_answer source and server stack trace as in the first failure above]
E       helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18):
E       Code: 60. DB::Exception: Received from 172.16.2.3:9000. DB::Exception: Unknown table expression identifier 'test_database.postgresql_replica_0' in scope SELECT * FROM `test_database.postgresql_replica_0` ORDER BY key ASC. (UNKNOWN_TABLE)
E       (query: select * from `test_database.postgresql_replica_0` order by key;)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
----------------------------- Captured stdout call -----------------------------
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_0" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Checking table postgresql_replica_0 exists in test_database
query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i);
query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0;
query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0;
query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0
query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i);
query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0;
query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1;
query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1
query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0;
query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0;
query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i);
query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0;
query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0;
query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1
query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i);
query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2;
query UPDATE postgresql_replica_{} SET key=key+10000000
query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1;
query DELETE FROM postgresql_replica_{} WHERE value%5 = 0;
Checking table postgresql_replica_0 exists in test_database
Checking table is synchronized: test_database.postgresql_replica_0
------------------------------ Captured log call -------------------------------
2025-06-08 17:18:08 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:18:08 [ 509 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.2.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
2025-06-08 17:18:08 [ 509 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query)
2025-06-08 17:18:09 [ 509 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:18:14 [ 509 ] DEBUG : Executing query select count() from test_database.postgresql_replica_0 on instance (cluster.py:3455, query)
2025-06-08 17:18:15 [ 509 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:18:15 [ 509 ] DEBUG : Executing query select * from `postgres_database`.`postgresql_replica_0` order by key; on instance (cluster.py:3455, query)
2025-06-08 17:18:15 [ 509 ] DEBUG : Executing query select * from `test_database.postgresql_replica_0` order by key; on instance (cluster.py:3455, query)
---------------------------- Captured log teardown -----------------------------
2025-06-08 17:18:15 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3455, query)
2025-06-08 17:18:16 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:18:16 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:18:16 [ 509 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.2.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
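test_single_transaction turns on transactional visibility: all of the DML above runs on a PostgreSQL connection opened with auto_commit=False, so none of it may reach the replica before conn.commit(), and the test asserts exactly that. A small helper expressing the same invariant (hypothetical, distilled from the test source above):

    def assert_not_replicated_before_commit(instance, table_name):
        # With the PostgreSQL transaction still open, the
        # MaterializedPostgreSQL side must not see any of the DML yet.
        count = int(instance.query(f"select count() from test_database.{table_name}"))
        assert count == 0, f"replica saw {count} uncommitted rows"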
./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::QueryAnalysisPass::run(std::shared_ptr&, std::shared_ptr) @ 0x000000002dbabc57 E 5. ./build_docker/./src/Analyzer/QueryTreePassManager.cpp:0: DB::QueryTreePassManager::run(std::shared_ptr) @ 0x000000002dba2d0e E 6. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:0: DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::shared_ptr const&, DB::SelectQueryOptions const&, std::shared_ptr const&, std::shared_ptr const&) @ 0x000000002e5ea236 E 7. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:160: DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::shared_ptr const&, std::shared_ptr const&, DB::SelectQueryOptions const&, std::vector> const&) @ 0x000000002e5e2d54 E 8. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:0: std::__unique_if::__unique_single std::make_unique[abi:v15000]&, std::shared_ptr const&, DB::SelectQueryOptions const&>(std::shared_ptr&, std::shared_ptr const&, DB::SelectQueryOptions const&) @ 0x000000002e5ede81 E 9. ./contrib/llvm-project/libcxx/include/__functional/function.h:0: ? @ 0x000000002e49edbd E 10. ./build_docker/./src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x000000002f22694a E 11. ./build_docker/./src/Interpreters/executeQuery.cpp:1376: DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x000000002f21d9ca E 12. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::TCPHandler::runImpl() @ 0x000000003272de3d E 13. ./build_docker/./src/Server/TCPHandler.cpp:2341: DB::TCPHandler::run() @ 0x0000000032772891 E 14. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x000000003a25ceaf E 15. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x000000003a25db57 E 16. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:0: Poco::PooledThread::run() @ 0x000000003a66ab3c E 17. ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000003a663b28 E 18. asan_thread_start(void*) @ 0x000000000a7b9edb E 19. ? @ 0x00007f94c93c9ac3 E 20. ? @ 0x00007f94c945b850 E . 
(UNKNOWN_TABLE) E (query: select * from `test_database.postgresql_replica_0` order by key;) helpers/client.py:239: QueryRuntimeException ---------------------------- Captured stdout setup ----------------------------- PostgreSQL is available - running test ----------------------------- Captured stdout call ----------------------------- Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_0" ( key Integer NOT NULL, value1 Integer, value2 Integer, value3 Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_1" ( key Integer NOT NULL, value1 Integer, value2 Integer, value3 Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_2" ( key Integer NOT NULL, value1 Integer, value2 Integer, value3 Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_3" ( key Integer NOT NULL, value1 Integer, value2 Integer, value3 Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_4" ( key Integer NOT NULL, value1 Integer, value2 Integer, value3 Integer, PRIMARY KEY(key)) Checking table postgresql_replica_0 exists in test_database Checking table is synchronized: test_database.postgresql_replica_0 ------------------------------ Captured log call ------------------------------- 2025-06-08 17:18:16 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_0 SELECT number, 0, 0, 0 from numbers(25) on instance (cluster.py:3455, query) 2025-06-08 17:18:17 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_1 SELECT number, 1, 1, 1 from numbers(25) on instance (cluster.py:3455, query) 2025-06-08 17:18:17 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_2 SELECT number, 2, 2, 2 from numbers(25) on instance (cluster.py:3455, query) 2025-06-08 17:18:17 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_3 SELECT number, 3, 3, 3 from numbers(25) on instance (cluster.py:3455, query) 2025-06-08 17:18:17 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_4 SELECT number, 4, 4, 4 from numbers(25) on instance (cluster.py:3455, query) 2025-06-08 17:18:17 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:18:18 [ 509 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.2.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) 2025-06-08 17:18:18 [ 509 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query) 2025-06-08 17:18:18 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_0 SELECT 25 + number, 0, 0, 0 from numbers(25) on instance (cluster.py:3455, query) 2025-06-08 17:18:18 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_1 SELECT 25 + number, 1, 1, 1 from numbers(25) on instance (cluster.py:3455, query) 2025-06-08 17:18:18 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_2 SELECT 25 + number, 2, 2, 2 from numbers(25) on instance (cluster.py:3455, query) 2025-06-08 17:18:19 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_3 SELECT 25 + number, 3, 3, 3 from numbers(25) on instance (cluster.py:3455, query) 2025-06-08 17:18:19 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_4 SELECT 25 + number, 4, 4, 4 from numbers(25) on instance 
___________________________ test_user_managed_slots ____________________________
[gw1] linux -- Python 3.10.12 /usr/bin/python3

started_cluster =

    def test_user_managed_slots(started_cluster):
        slot_name = "user_slot"
        table_name = "test_table"
        pg_manager.create_and_fill_postgres_table(table_name)
        replication_connection = get_postgres_conn(
            ip=started_cluster.postgres_ip,
            port=started_cluster.postgres_port,
            database=True,
            replication=True,
            auto_commit=True,
        )
        snapshot = create_replication_slot(replication_connection, slot_name=slot_name)
        pg_manager.create_materialized_db(
            ip=started_cluster.postgres_ip,
            port=started_cluster.postgres_port,
            settings=[
                f"materialized_postgresql_replication_slot = '{slot_name}'",
                f"materialized_postgresql_snapshot = '{snapshot}'",
            ],
        )
>       check_tables_are_synchronized(instance, table_name)

test_postgresql_replica_database_engine_1/test.py:865:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:392: in check_tables_are_synchronized
    result = instance.query(result_query)
helpers/cluster.py:3456: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18):
E Code: 60. DB::Exception: Received from 172.16.2.3:9000. DB::Exception: Unknown table expression identifier 'test_database.test_table' in scope SELECT * FROM `test_database.test_table` ORDER BY key ASC.
Stack trace:
E
E 0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x000000003a554b73
E 1. ./build_docker/./src/Common/Exception.cpp:96: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001cb49070
E 2. DB::Exception::Exception(int, FormatStringHelperImpl::type, std::type_identity::type>, String const&, String&&) @ 0x000000000c5c35b6
E 3. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:6971: DB::(anonymous namespace)::QueryAnalyzer::resolveQuery(std::shared_ptr const&, DB::(anonymous namespace)::IdentifierResolveScope&) @ 0x000000002dbba909
E 4. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::QueryAnalysisPass::run(std::shared_ptr&, std::shared_ptr) @ 0x000000002dbabc57
E 5. ./build_docker/./src/Analyzer/QueryTreePassManager.cpp:0: DB::QueryTreePassManager::run(std::shared_ptr) @ 0x000000002dba2d0e
E 6. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:0: DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::shared_ptr const&, DB::SelectQueryOptions const&, std::shared_ptr const&, std::shared_ptr const&) @ 0x000000002e5ea236
E 7. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:160: DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::shared_ptr const&, std::shared_ptr const&, DB::SelectQueryOptions const&, std::vector> const&) @ 0x000000002e5e2d54
E 8. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:0: std::__unique_if::__unique_single std::make_unique[abi:v15000]&, std::shared_ptr const&, DB::SelectQueryOptions const&>(std::shared_ptr&, std::shared_ptr const&, DB::SelectQueryOptions const&) @ 0x000000002e5ede81
E 9. ./contrib/llvm-project/libcxx/include/__functional/function.h:0: ? @ 0x000000002e49edbd
E 10. ./build_docker/./src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x000000002f22694a
E 11. ./build_docker/./src/Interpreters/executeQuery.cpp:1376: DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x000000002f21d9ca
E 12. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::TCPHandler::runImpl() @ 0x000000003272de3d
E 13. ./build_docker/./src/Server/TCPHandler.cpp:2341: DB::TCPHandler::run() @ 0x0000000032772891
E 14. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x000000003a25ceaf
E 15. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x000000003a25db57
E 16. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:0: Poco::PooledThread::run() @ 0x000000003a66ab3c
E 17. ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000003a663b28
E 18. asan_thread_start(void*) @ 0x000000000a7b9edb
E 19. ? @ 0x00007f94c93c9ac3
E 20. ? @ 0x00007f94c945b850
E .
(UNKNOWN_TABLE)
E (query: select * from `test_database.test_table` order by key;)
helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
----------------------------- Captured stdout call -----------------------------
Query: CREATE TABLE IF NOT EXISTS "test_table" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
user_slot 0/4714F578 000000DB-0000000E-1
Checking table test_table exists in test_database
Checking table is synchronized: test_database.test_table
------------------------------ Captured log call -------------------------------
2025-06-08 17:18:20 [ 509 ] DEBUG : Executing query INSERT INTO `postgres_database`.`test_table` SELECT number, number from numbers(50) on instance (cluster.py:3455, query)
2025-06-08 17:18:21 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:18:21 [ 509 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.2.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') SETTINGS materialized_postgresql_replication_slot = 'user_slot', materialized_postgresql_snapshot = '000000DB-0000000E-1' on instance (cluster.py:3455, query)
2025-06-08 17:18:21 [ 509 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query)
2025-06-08 17:18:21 [ 509 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:18:21 [ 509 ] DEBUG : Executing query select * from `postgres_database`.`test_table` order by key; on instance (cluster.py:3455, query)
2025-06-08 17:18:21 [ 509 ] DEBUG : Executing query select * from `test_database.test_table` order by key; on instance (cluster.py:3455, query)
---------------------------- Captured log teardown -----------------------------
2025-06-08 17:18:22 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3455, query)
2025-06-08 17:18:22 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:18:22 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:18:22 [ 509 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.2.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
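For test_user_managed_slots the replication setup itself works: the captured stdout shows the pre-created slot, its consistent LSN and the exported snapshot (user_slot 0/4714F578 000000DB-0000000E-1), and both values are threaded into CREATE DATABASE ... SETTINGS in the captured log. Only the final synchronization check fails, with the same quoting error as before. A sketch of what create_replication_slot plausibly does on the replication connection; the real helper in helpers/postgres_utility.py may differ in signature:

def create_replication_slot(conn, slot_name="user_slot"):
    # On a replication connection, CREATE_REPLICATION_SLOT ... EXPORT_SNAPSHOT
    # returns one row: (slot_name, consistent_point, snapshot_name,
    # output_plugin), matching the three values printed in the stdout above.
    cur = conn.cursor()
    cur.execute(
        f"CREATE_REPLICATION_SLOT {slot_name} LOGICAL pgoutput EXPORT_SNAPSHOT"
    )
    row = cur.fetchone()
    return row[2]  # the exported snapshot id, e.g. '000000DB-0000000E-1'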
_____________________________ test_virtual_columns _____________________________
[gw1] linux -- Python 3.10.12 /usr/bin/python3

started_cluster =

    def test_virtual_columns(started_cluster):
        conn = get_postgres_conn(
            ip=started_cluster.postgres_ip,
            port=started_cluster.postgres_port,
            database=True,
        )
        cursor = conn.cursor()
        table_name = "postgresql_replica_0"
        create_postgres_table(cursor, table_name)
        pg_manager.create_materialized_db(
            ip=started_cluster.postgres_ip,
            port=started_cluster.postgres_port,
        )
        assert_nested_table_is_created(instance, table_name)
        instance.query(
            f"INSERT INTO postgres_database.{table_name} SELECT number, number from numbers(10)"
        )
>       check_tables_are_synchronized(instance, table_name)

test_postgresql_replica_database_engine_1/test.py:553:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:392: in check_tables_are_synchronized
    result = instance.query(result_query)
helpers/cluster.py:3456: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self =

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18):
E Code: 60. DB::Exception: Received from 172.16.2.3:9000. DB::Exception: Unknown table expression identifier 'test_database.postgresql_replica_0' in scope SELECT * FROM `test_database.postgresql_replica_0` ORDER BY key ASC.
Stack trace:
E
E 0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x000000003a554b73
E 1. ./build_docker/./src/Common/Exception.cpp:96: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001cb49070
E 2. DB::Exception::Exception(int, FormatStringHelperImpl::type, std::type_identity::type>, String const&, String&&) @ 0x000000000c5c35b6
E 3. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:6971: DB::(anonymous namespace)::QueryAnalyzer::resolveQuery(std::shared_ptr const&, DB::(anonymous namespace)::IdentifierResolveScope&) @ 0x000000002dbba909
E 4. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::QueryAnalysisPass::run(std::shared_ptr&, std::shared_ptr) @ 0x000000002dbabc57
E 5. ./build_docker/./src/Analyzer/QueryTreePassManager.cpp:0: DB::QueryTreePassManager::run(std::shared_ptr) @ 0x000000002dba2d0e
E 6. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:0: DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::shared_ptr const&, DB::SelectQueryOptions const&, std::shared_ptr const&, std::shared_ptr const&) @ 0x000000002e5ea236
E 7. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:160: DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::shared_ptr const&, std::shared_ptr const&, DB::SelectQueryOptions const&, std::vector> const&) @ 0x000000002e5e2d54
E 8. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:0: std::__unique_if::__unique_single std::make_unique[abi:v15000]&, std::shared_ptr const&, DB::SelectQueryOptions const&>(std::shared_ptr&, std::shared_ptr const&, DB::SelectQueryOptions const&) @ 0x000000002e5ede81
E 9. ./contrib/llvm-project/libcxx/include/__functional/function.h:0: ? @ 0x000000002e49edbd
E 10. ./build_docker/./src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x000000002f22694a
E 11. ./build_docker/./src/Interpreters/executeQuery.cpp:1376: DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x000000002f21d9ca
E 12. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::TCPHandler::runImpl() @ 0x000000003272de3d
E 13. ./build_docker/./src/Server/TCPHandler.cpp:2341: DB::TCPHandler::run() @ 0x0000000032772891
E 14. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x000000003a25ceaf
E 15. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x000000003a25db57
E 16. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:0: Poco::PooledThread::run() @ 0x000000003a66ab3c
E 17. ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000003a663b28
E 18. asan_thread_start(void*) @ 0x000000000a7b9edb
E 19. ? @ 0x00007f94c93c9ac3
E 20. ? @ 0x00007f94c945b850
E .
(UNKNOWN_TABLE)
E (query: select * from `test_database.postgresql_replica_0` order by key;)
helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
----------------------------- Captured stdout call -----------------------------
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_0" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Checking table postgresql_replica_0 exists in test_database
Checking table postgresql_replica_0 exists in test_database
Checking table is synchronized: test_database.postgresql_replica_0
------------------------------ Captured log call -------------------------------
2025-06-08 17:18:23 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:18:23 [ 509 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.2.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
2025-06-08 17:18:23 [ 509 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query)
2025-06-08 17:18:23 [ 509 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:18:23 [ 509 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_0 SELECT number, number from numbers(10) on instance (cluster.py:3455, query)
2025-06-08 17:18:23 [ 509 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:18:24 [ 509 ] DEBUG : Executing query select * from `postgres_database`.`postgresql_replica_0` order by key; on instance (cluster.py:3455, query)
2025-06-08 17:18:24 [ 509 ] DEBUG : Executing query select * from `test_database.postgresql_replica_0` order by key; on instance (cluster.py:3455, query)
---------------------------- Captured log teardown -----------------------------
2025-06-08 17:18:24 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3455, query)
2025-06-08 17:18:24 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:18:25 [ 509 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:18:25 [ 509 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.2.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
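test_virtual_columns never reaches the behavior it is named after: assert_nested_table_is_created and the insert both succeed, and the run dies on the same mis-quoted result_query. For reference, once synchronization is confirmed, the natural follow-up read would touch the engine's virtual columns. The snippet below is illustrative only (not taken from the test) and assumes the documented _version and _sign virtuals of MaterializedPostgreSQL:

def read_with_virtuals(instance, database="test_database", table="postgresql_replica_0"):
    # _version and _sign are MaterializedPostgreSQL's virtual columns;
    # note the database and table quoted as two separate identifiers.
    return instance.query(
        f"SELECT key, value, _version, _sign FROM `{database}`.`{table}` ORDER BY key"
    )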
2025-06-08 17:18:25 [ 509 ] DEBUG : Command:['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/.env', '--project-name', 'roottestpostgresqlreplicadatabaseengine1', '--file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/instance/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_postgres.yml', 'stop', '--timeout', '20'] (cluster.py:113, run_and_check)
2025-06-08 17:18:26 [ 509 ] DEBUG : Stderr:Stopping roottestpostgresqlreplicadatabaseengine1_instance_1 ... (cluster.py:123, run_and_check)
2025-06-08 17:18:26 [ 509 ] DEBUG : Stderr:Stopping roottestpostgresqlreplicadatabaseengine1_postgres1_1 ... (cluster.py:123, run_and_check)
2025-06-08 17:18:26 [ 509 ] DEBUG : Stderr:Stopping roottestpostgresqlreplicadatabaseengine1_postgres1_1 ... done (cluster.py:123, run_and_check)
2025-06-08 17:18:26 [ 509 ] DEBUG : Stderr:Stopping roottestpostgresqlreplicadatabaseengine1_instance_1 ... done (cluster.py:123, run_and_check)
2025-06-08 17:18:26 [ 509 ] DEBUG : Command:['bash', '-c', '[ -f /ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/instance/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/instance/logs/stderr.log* || true'] (cluster.py:113, run_and_check)
2025-06-08 17:18:26 [ 509 ] DEBUG : Command:['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/.env', '--project-name', 'roottestpostgresqlreplicadatabaseengine1', '--file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/instance/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_postgres.yml', 'down', '--volumes'] (cluster.py:113, run_and_check)
2025-06-08 17:18:27 [ 509 ] DEBUG : Stderr:Removing roottestpostgresqlreplicadatabaseengine1_instance_1 ... (cluster.py:123, run_and_check)
2025-06-08 17:18:27 [ 509 ] DEBUG : Stderr:Removing roottestpostgresqlreplicadatabaseengine1_postgres1_1 ... (cluster.py:123, run_and_check)
2025-06-08 17:18:27 [ 509 ] DEBUG : Stderr:Removing roottestpostgresqlreplicadatabaseengine1_postgres1_1 ... done (cluster.py:123, run_and_check)
2025-06-08 17:18:27 [ 509 ] DEBUG : Stderr:Removing roottestpostgresqlreplicadatabaseengine1_instance_1 ... done (cluster.py:123, run_and_check)
2025-06-08 17:18:27 [ 509 ] DEBUG : Stderr:Removing network roottestpostgresqlreplicadatabaseengine1_default (cluster.py:123, run_and_check)
2025-06-08 17:18:27 [ 509 ] DEBUG : Cleanup called (cluster.py:801, cleanup)
2025-06-08 17:18:27 [ 509 ] DEBUG : Docker networks for project roottestpostgresqlreplicadatabaseengine1 are NETWORK ID NAME DRIVER SCOPE (cluster.py:780, print_all_docker_pieces)
2025-06-08 17:18:27 [ 509 ] DEBUG : Docker containers for project roottestpostgresqlreplicadatabaseengine1 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:788, print_all_docker_pieces)
2025-06-08 17:18:27 [ 509 ] DEBUG : Docker volumes for project roottestpostgresqlreplicadatabaseengine1 are DRIVER VOLUME NAME (cluster.py:796, print_all_docker_pieces)
2025-06-08 17:18:27 [ 509 ] DEBUG : Command:docker container list --all --filter name='^/roottestpostgresqlreplicadatabaseengine1_.*_1$' --format '{{.ID}}:{{.Names}}' (cluster.py:113, run_and_check)
2025-06-08 17:18:27 [ 509 ] DEBUG : Unstopped containers: {} (cluster.py:815, cleanup)
2025-06-08 17:18:27 [ 509 ] DEBUG : No running containers for project: roottestpostgresqlreplicadatabaseengine1 (cluster.py:829, cleanup)
2025-06-08 17:18:27 [ 509 ] DEBUG : Trying to prune unused networks... (cluster.py:835, cleanup)
2025-06-08 17:18:27 [ 509 ] DEBUG : Trying to prune unused images... (cluster.py:851, cleanup)
2025-06-08 17:18:27 [ 509 ] DEBUG : Command:['docker', 'image', 'prune', '-f'] (cluster.py:113, run_and_check)
2025-06-08 17:18:27 [ 509 ] DEBUG : Stdout:Total reclaimed space: 0B (cluster.py:121, run_and_check)
2025-06-08 17:18:27 [ 509 ] DEBUG : Images pruned (cluster.py:854, cleanup)
2025-06-08 17:18:27 [ 509 ] DEBUG : Trying to prune unused volumes... (cluster.py:860, cleanup)
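The teardown that follows each failed module is uniform: docker-compose stop, a zgrep over stderr.log for sanitizer report markers ("=================="), docker-compose down --volumes, then network/image/volume pruning, all driven through the run_and_check wrapper whose Stdout:/Stderr: prefixed lines (cluster.py:113) appear above. A reduced sketch of that wrapper, under the assumption that the real one also supports timeouts and a nothrow mode:

import subprocess

def run_and_check(cmd, shell=False):
    # Mirrors the logging pattern visible above: every output line is
    # echoed with a Stdout:/Stderr: prefix, and a non-zero exit raises.
    res = subprocess.run(cmd, shell=shell, capture_output=True, text=True)
    for line in res.stdout.splitlines():
        print(f"Stdout:{line}")
    for line in res.stderr.splitlines():
        print(f"Stderr:{line}")
    if res.returncode != 0:
        raise subprocess.CalledProcessError(res.returncode, cmd)
    return res.stdout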
2025-06-08 17:18:27 [ 509 ] DEBUG : Command:['docker volume ls | wc -l'] (cluster.py:113, run_and_check)
2025-06-08 17:18:27 [ 509 ] DEBUG : Stdout:3 (cluster.py:121, run_and_check)
=============================== warnings summary ===============================
test_postgresql_replica_database_engine_1/test.py::test_many_concurrent_queries
  /usr/local/lib/python3.10/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-36 (attack)
  Traceback (most recent call last):
    File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
      self.run()
    File "/usr/lib/python3.10/threading.py", line 953, in run
      self._target(*self._args, **self._kwargs)
    File "/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/test.py", line 433, in attack
      cursor.execute(query_pool[query_id].format(random_table_name))
  psycopg2.errors.NumericValueOutOfRange: integer out of range
  warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg))
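This first warning is pre-existing noise rather than part of the UNKNOWN_TABLE breakage: the attack thread of test_many_concurrent_queries (test.py:433) fires randomized statements at random tables, and one of them overflowed PostgreSQL's int4, which pytest can only surface as an unhandled-thread-exception warning. A tolerant variant would catch exactly that class; query_pool and random_table_name are the test's own names, the function body here is a sketch:

import random
import psycopg2.errors

def attack(cursor, query_pool, table_names):
    # Same shape as the thread target in the traceback above, but the
    # expected overflow no longer escapes the worker thread.
    query_id = random.randrange(len(query_pool))
    random_table_name = random.choice(table_names)
    try:
        cursor.execute(query_pool[query_id].format(random_table_name))
    except psycopg2.errors.NumericValueOutOfRange:
        pass  # generated value exceeded the int4 range; harmless here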
"/usr/local/lib/python3.10/dist-packages/pymysql/cursors.py", line 153, in execute result = self._query(query) File "/usr/local/lib/python3.10/dist-packages/pymysql/cursors.py", line 322, in _query conn.query(q) File "/usr/local/lib/python3.10/dist-packages/pymysql/connections.py", line 563, in query self._affected_rows = self._read_query_result(unbuffered=unbuffered) File "/usr/local/lib/python3.10/dist-packages/pymysql/connections.py", line 825, in _read_query_result result.read() File "/usr/local/lib/python3.10/dist-packages/pymysql/connections.py", line 1199, in read first_packet = self.connection._read_packet() File "/usr/local/lib/python3.10/dist-packages/pymysql/connections.py", line 775, in _read_packet packet.raise_for_error() File "/usr/local/lib/python3.10/dist-packages/pymysql/protocol.py", line 219, in raise_for_error err.raise_mysql_exception(self._data) File "/usr/local/lib/python3.10/dist-packages/pymysql/err.py", line 150, in raise_mysql_exception raise errorclass(errno, errval) pymysql.err.OperationalError: (1053, 'Server shutdown in progress') warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg)) -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html ============================== slowest durations =============================== 34.63s setup test_materialized_mysql_database/test.py::test_materialized_database_settings_materialized_mysql_tables_list 31.81s setup test_mutations_in_partitions_of_merge_tree/test.py::test_mutation_max_streams 30.83s setup test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-10-0] 29.19s setup test_on_cluster_timeouts/test.py::test_long_query 26.06s call test_on_cluster_timeouts/test.py::test_long_query 22.54s teardown test_move_partition_to_volume_async/test.py::test_sync_alter_move 22.21s setup test_parallel_replicas_distributed_skip_shards/test.py::test_error_on_unavailable_shards[0] 21.99s teardown test_postgresql_protocol/test.py::test_python_client 21.87s setup test_peak_memory_usage/test.py::test_clickhouse_client_max_peak_memory_single_node 20.43s setup test_modify_engine_on_restart/test_unusual_path.py::test_modify_engine_on_restart_with_unusual_path 20.02s call test_materialized_mysql_database/test.py::test_materialized_with_enum 19.37s setup test_move_partition_to_volume_async/test.py::test_async_alter_move 17.51s setup test_postgresql_protocol/test.py::test_java_client 17.17s setup test_postgresql_replica_database_engine_1/test.py::test_abrupt_connection_loss_while_heavy_replication 16.89s setup test_quota/test.py::test_add_remove_interval 16.65s setup test_merge_table_over_distributed/test.py::test_filtering 16.09s call test_materialized_mysql_database/test.py::test_table_overrides 15.87s call test_postgresql_replica_database_engine_1/test.py::test_drop_database_while_replication_startup_not_finished 15.55s call test_materialized_mysql_database/test.py::test_mysql_killed_while_insert_8_0 15.20s call test_materialized_mysql_database/test.py::test_mysql_killed_while_insert_5_7 14.75s call test_postgresql_replica_database_engine_1/test.py::test_many_concurrent_queries 14.43s setup test_overcommit_tracker/test.py::test_user_overcommit 13.93s call test_overcommit_tracker/test.py::test_user_overcommit 12.07s setup test_prometheus_endpoint/test.py::test_prometheus_endpoint 11.83s call test_postgresql_replica_database_engine_1/test.py::test_abrupt_server_restart_while_heavy_replication 11.08s call 
test_postgresql_replica_database_engine_1/test.py::test_restart_server_while_replication_startup_not_finished 11.00s call test_materialized_mysql_database/test.py::test_utf8mb4 10.43s teardown test_materialized_mysql_database/test.py::test_utf8mb4 10.00s teardown test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-4-1] 8.98s call test_materialized_mysql_database/test.py::test_mysql_kill_sync_thread_restore_8_0 8.86s call test_materialized_mysql_database/test.py::test_mysql_kill_sync_thread_restore_5_7 8.84s teardown test_parallel_replicas_distributed_skip_shards/test.py::test_skip_unavailable_shards[1] 8.15s setup test_merge_tree_settings_constraints/test.py::test_merge_tree_settings_constraints 7.33s call test_postgresql_replica_database_engine_1/test.py::test_single_transaction 7.13s call test_postgresql_replica_database_engine_1/test.py::test_abrupt_connection_loss_while_heavy_replication 7.11s call test_modify_engine_on_restart/test_unusual_path.py::test_modify_engine_on_restart_with_unusual_path 6.96s call test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_subset_of_database_tables 6.73s teardown test_peak_memory_usage/test.py::test_clickhouse_client_max_peak_memory_usage_distributed 6.51s call test_postgresql_replica_database_engine_1/test.py::test_multiple_databases 5.90s teardown test_on_cluster_timeouts/test.py::test_long_query 5.84s call test_materialized_mysql_database/test.py::test_materialized_database_settings_materialized_mysql_tables_list 5.66s call test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-3-1] 5.37s setup test_passing_max_partitions_to_read_remotely/test.py::test_default_database_on_cluster 5.29s teardown test_merge_table_over_distributed/test.py::test_select_table_name_from_merge_over_distributed 5.18s teardown test_prometheus_endpoint/test.py::test_prometheus_endpoint 5.09s call test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-2-1] 5.05s call test_materialized_mysql_database/test.py::test_materialized_database_support_all_kinds_of_mysql_datatype 5.03s call test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_replicated_merge_tree 5.00s call test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-3-0] 4.97s call test_quota/test.py::test_add_remove_interval 4.96s setup test_materialized_view_restart_server/test.py::test_materialized_view_with_subquery 4.89s call test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-2-0] 4.84s call test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-10-1] 4.84s call test_materialized_mysql_database/test.py::test_multi_table_update 4.80s call test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-4-0] 4.76s teardown test_quota/test.py::test_tracking_quota 4.73s teardown test_overcommit_tracker/test.py::test_user_overcommit 4.69s call test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-10-0] 4.61s call 
test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-4-1] 4.41s call test_postgresql_replica_database_engine_1/test.py::test_concurrent_transactions 4.36s call test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-4-0] 4.36s call test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-3-1] 4.33s call test_materialized_mysql_database/test.py::test_select_without_columns_5_7 4.30s call test_move_partition_to_volume_async/test.py::test_sync_alter_move 4.29s call test_materialized_mysql_database/test.py::test_select_without_columns_8_0 4.26s call test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-10-0] 4.22s call test_materialized_mysql_database/test.py::test_named_collections 4.21s call test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-2-0] 4.18s call test_materialized_view_restart_server/test.py::test_materialized_view_with_subquery 4.16s call test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-2-1] 4.16s call test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-10-1] 4.13s call test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-4-1] 4.11s call test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-3-0] 4.01s teardown test_passing_max_partitions_to_read_remotely/test.py::test_default_database_on_cluster 3.97s call test_quota/test.py::test_dcl_management 3.89s call test_quota/test.py::test_dcl_introspection 3.82s call test_move_partition_to_volume_async/test.py::test_async_alter_move 3.73s teardown test_merge_tree_settings_constraints/test.py::test_merge_tree_settings_constraints 3.73s call test_quota/test.py::test_add_remove_quota 3.71s call test_materialized_mysql_database/test.py::test_text_blob_charset 3.55s call test_postgresql_replica_database_engine_1/test.py::test_replicating_dml 3.32s teardown test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_replicated_merge_tree 3.24s call test_parallel_replicas_distributed_skip_shards/test.py::test_no_unavailable_shards[0] 3.21s call test_postgresql_replica_database_engine_1/test.py::test_different_data_types 3.18s call test_quota/test.py::test_reload_users_xml_by_timer 3.16s call test_quota/test.py::test_exceed_quota 3.14s call test_postgresql_replica_database_engine_1/test.py::test_table_schema_changes 3.01s call test_materialized_mysql_database/test.py::test_materialized_with_column_comments 2.99s teardown test_postgresql_replica_database_engine_1/test.py::test_virtual_columns 2.99s teardown test_materialized_view_restart_server/test.py::test_materialized_view_with_subquery 2.94s teardown test_modify_engine_on_restart/test_unusual_path.py::test_modify_engine_on_restart_with_unusual_path 2.90s call test_mutations_in_partitions_of_merge_tree/test.py::test_mutation_max_streams 2.89s call test_materialized_mysql_database/test.py::test_mysql_settings[clickhouse_node1] 2.89s call test_materialized_mysql_database/test.py::test_mysql_settings[clickhouse_node0] 2.85s call 
test_materialized_mysql_database/test.py::test_table_table 2.79s call test_materialized_mysql_database/test.py::test_system_tables_table 2.79s call test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_merge_tree_without_where 2.77s call test_materialized_mysql_database/test.py::test_system_parts_table 2.74s teardown test_postgresql_replica_database_engine_1/test.py::test_concurrent_transactions 2.56s call test_postgresql_replica_database_engine_1/test.py::test_clickhouse_restart 2.51s call test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_merge_tree_with_where 2.37s call test_parallel_replicas_distributed_skip_shards/test.py::test_error_on_unavailable_shards[1] 2.36s call test_quota/test.py::test_tracking_quota 2.33s call test_parallel_replicas_distributed_skip_shards/test.py::test_error_on_unavailable_shards[0] 2.19s call test_parallel_replicas_distributed_skip_shards/test.py::test_no_unavailable_shards[1] 2.13s teardown test_postgresql_replica_database_engine_1/test.py::test_abrupt_connection_loss_while_heavy_replication 2.07s call test_parallel_replicas_distributed_skip_shards/test.py::test_skip_unavailable_shards[1] 2.02s call test_parallel_replicas_distributed_skip_shards/test.py::test_skip_unavailable_shards[0] 2.01s call test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_all_database_tables 2.00s call test_quota/test.py::test_query_inserts 1.72s call test_postgresql_replica_database_engine_1/test.py::test_quoting_2 1.67s call test_materialized_mysql_database/test.py::test_savepoint_query 1.59s teardown test_postgresql_replica_database_engine_1/test.py::test_multiple_databases 1.58s call test_postgresql_replica_database_engine_1/test.py::test_changing_replica_identity_value 1.53s call test_postgresql_replica_database_engine_1/test.py::test_replica_identity_index 1.52s call test_quota/test.py::test_quota_from_users_xml 1.48s call test_postgresql_replica_database_engine_1/test.py::test_virtual_columns 1.48s call test_quota/test.py::test_simpliest_quota 1.47s call test_materialized_mysql_database/test.py::test_table_with_indexes 1.42s call test_prometheus_endpoint/test.py::test_prometheus_endpoint 1.25s teardown test_postgresql_replica_database_engine_1/test.py::test_abrupt_server_restart_while_heavy_replication 1.24s call test_merge_tree_settings_constraints/test.py::test_merge_tree_settings_constraints 1.23s call test_postgresql_replica_database_engine_1/test.py::test_user_managed_slots 1.23s teardown test_postgresql_replica_database_engine_1/test.py::test_quoting_2 1.21s call test_postgresql_replica_database_engine_1/test.py::test_quoting_1 1.00s teardown test_postgresql_replica_database_engine_1/test.py::test_different_data_types 0.92s teardown test_postgresql_replica_database_engine_1/test.py::test_single_transaction 0.90s teardown test_postgresql_replica_database_engine_1/test.py::test_user_managed_slots 0.88s teardown test_postgresql_replica_database_engine_1/test.py::test_many_concurrent_queries 0.87s teardown test_postgresql_replica_database_engine_1/test.py::test_changing_replica_identity_value 0.82s teardown test_postgresql_replica_database_engine_1/test.py::test_restart_server_while_replication_startup_not_finished 0.81s setup test_quota/test.py::test_dcl_introspection 0.80s setup test_quota/test.py::test_exceed_quota 0.80s teardown test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_all_database_tables 0.79s setup test_quota/test.py::test_simpliest_quota 0.79s teardown 
test_postgresql_replica_database_engine_1/test.py::test_replica_identity_index 0.77s setup test_quota/test.py::test_consumption_of_show_privileges 0.77s setup test_quota/test.py::test_reload_users_xml_by_timer 0.76s call test_merge_table_over_distributed/test.py::test_select_table_name_from_merge_over_distributed 0.74s setup test_quota/test.py::test_consumption_of_show_tables 0.74s setup test_quota/test.py::test_quota_from_users_xml 0.74s teardown test_postgresql_replica_database_engine_1/test.py::test_table_schema_changes 0.73s setup test_quota/test.py::test_query_inserts 0.73s teardown test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_subset_of_database_tables 0.73s teardown test_postgresql_replica_database_engine_1/test.py::test_replicating_dml 0.73s teardown test_postgresql_replica_database_engine_1/test.py::test_quoting_1 0.72s setup test_quota/test.py::test_tracking_quota 0.72s setup test_quota/test.py::test_consumption_of_show_clusters 0.72s teardown test_postgresql_replica_database_engine_1/test.py::test_clickhouse_restart 0.72s call test_postgresql_protocol/test.py::test_java_client 0.68s setup test_quota/test.py::test_dcl_management 0.67s setup test_quota/test.py::test_add_remove_quota 0.67s setup test_quota/test.py::test_consumption_of_show_databases 0.65s call test_merge_table_over_distributed/test.py::test_filtering 0.64s call test_peak_memory_usage/test.py::test_clickhouse_client_max_peak_memory_single_node 0.64s call test_postgresql_protocol/test.py::test_psql_client 0.61s setup test_quota/test.py::test_consumption_of_show_processlist 0.58s teardown test_postgresql_replica_database_engine_1/test.py::test_drop_database_while_replication_startup_not_finished 0.58s call test_quota/test.py::test_consumption_of_show_clusters 0.55s call test_peak_memory_usage/test.py::test_clickhouse_client_max_peak_memory_usage_distributed 0.53s call test_quota/test.py::test_consumption_of_show_databases 0.53s call test_passing_max_partitions_to_read_remotely/test.py::test_default_database_on_cluster 0.43s call test_merge_table_over_distributed/test.py::test_global_in 0.38s call test_quota/test.py::test_consumption_of_show_tables 0.36s call test_quota/test.py::test_consumption_of_show_privileges 0.34s call test_quota/test.py::test_consumption_of_show_processlist 0.08s call test_postgresql_protocol/test.py::test_python_client 0.06s setup test_move_partition_to_volume_async/test.py::test_sync_alter_move 0.01s teardown test_peak_memory_usage/test.py::test_clickhouse_client_max_peak_memory_single_node 0.00s setup test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_all_database_tables 0.00s teardown test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-3-1] 0.00s setup test_materialized_mysql_database/test.py::test_mysql_settings[clickhouse_node0] 0.00s teardown test_materialized_mysql_database/test.py::test_materialized_with_enum 0.00s setup test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-4-0] 0.00s setup test_materialized_mysql_database/test.py::test_utf8mb4 0.00s setup test_materialized_mysql_database/test.py::test_multi_table_update 0.00s setup test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-10-0] 0.00s teardown test_materialized_mysql_database/test.py::test_table_overrides 0.00s setup 
test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-3-0] 0.00s setup test_parallel_replicas_distributed_skip_shards/test.py::test_error_on_unavailable_shards[1] 0.00s setup test_materialized_mysql_database/test.py::test_mysql_settings[clickhouse_node1] 0.00s setup test_postgresql_replica_database_engine_1/test.py::test_many_concurrent_queries 0.00s teardown test_materialized_mysql_database/test.py::test_materialized_database_settings_materialized_mysql_tables_list 0.00s setup test_postgresql_replica_database_engine_1/test.py::test_single_transaction 0.00s setup test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-4-0] 0.00s teardown test_parallel_replicas_distributed_skip_shards/test.py::test_error_on_unavailable_shards[0] 0.00s setup test_postgresql_replica_database_engine_1/test.py::test_clickhouse_restart 0.00s setup test_postgresql_replica_database_engine_1/test.py::test_replicating_dml 0.00s setup test_parallel_replicas_distributed_skip_shards/test.py::test_skip_unavailable_shards[0] 0.00s setup test_postgresql_replica_database_engine_1/test.py::test_virtual_columns 0.00s setup test_materialized_mysql_database/test.py::test_materialized_database_support_all_kinds_of_mysql_datatype 0.00s setup test_postgresql_replica_database_engine_1/test.py::test_replica_identity_index 0.00s setup test_materialized_mysql_database/test.py::test_table_overrides 0.00s setup test_materialized_mysql_database/test.py::test_system_parts_table 0.00s setup test_postgresql_replica_database_engine_1/test.py::test_table_schema_changes 0.00s setup test_postgresql_replica_database_engine_1/test.py::test_restart_server_while_replication_startup_not_finished 0.00s teardown test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-10-0] 0.00s setup test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-3-0] 0.00s teardown test_materialized_mysql_database/test.py::test_mysql_settings[clickhouse_node0] 0.00s setup test_materialized_mysql_database/test.py::test_table_table 0.00s setup test_postgresql_replica_database_engine_1/test.py::test_abrupt_server_restart_while_heavy_replication 0.00s setup test_materialized_mysql_database/test.py::test_text_blob_charset 0.00s setup test_materialized_mysql_database/test.py::test_select_without_columns_5_7 0.00s setup test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-2-1] 0.00s teardown test_mutations_in_partitions_of_merge_tree/test.py::test_mutation_max_streams 0.00s setup test_materialized_mysql_database/test.py::test_table_with_indexes 0.00s setup test_materialized_mysql_database/test.py::test_system_tables_table 0.00s teardown test_materialized_mysql_database/test.py::test_mysql_killed_while_insert_5_7 0.00s teardown test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-4-1] 0.00s setup test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-10-1] 0.00s teardown test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-2-1] 0.00s setup test_materialized_mysql_database/test.py::test_select_without_columns_8_0 
0.00s setup test_postgresql_replica_database_engine_1/test.py::test_drop_database_while_replication_startup_not_finished 0.00s setup test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-2-0] 0.00s setup test_materialized_mysql_database/test.py::test_savepoint_query 0.00s teardown test_postgresql_protocol/test.py::test_java_client 0.00s setup test_materialized_mysql_database/test.py::test_mysql_kill_sync_thread_restore_5_7 0.00s setup test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-10-1] 0.00s setup test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-4-1] 0.00s teardown test_materialized_mysql_database/test.py::test_system_tables_table 0.00s setup test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-3-1] 0.00s setup test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_subset_of_database_tables 0.00s teardown test_postgresql_protocol/test.py::test_psql_client 0.00s setup test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_merge_tree_with_where 0.00s setup test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-2-0] 0.00s setup test_postgresql_protocol/test.py::test_python_client 0.00s teardown test_materialized_mysql_database/test.py::test_select_without_columns_5_7 0.00s setup test_postgresql_replica_database_engine_1/test.py::test_quoting_2 0.00s setup test_materialized_mysql_database/test.py::test_mysql_killed_while_insert_8_0 0.00s setup test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-3-1] 0.00s teardown test_quota/test.py::test_add_remove_interval 0.00s setup test_postgresql_replica_database_engine_1/test.py::test_concurrent_transactions 0.00s setup test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-4-1] 0.00s teardown test_quota/test.py::test_reload_users_xml_by_timer 0.00s setup test_postgresql_replica_database_engine_1/test.py::test_multiple_databases 0.00s setup test_postgresql_replica_database_engine_1/test.py::test_changing_replica_identity_value 0.00s teardown test_materialized_mysql_database/test.py::test_select_without_columns_8_0 0.00s teardown test_materialized_mysql_database/test.py::test_table_with_indexes 0.00s teardown test_move_partition_to_volume_async/test.py::test_async_alter_move 0.00s setup test_postgresql_replica_database_engine_1/test.py::test_user_managed_slots 0.00s teardown test_materialized_mysql_database/test.py::test_savepoint_query 0.00s setup test_materialized_mysql_database/test.py::test_named_collections 0.00s setup test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_replicated_merge_tree 0.00s setup test_postgresql_replica_database_engine_1/test.py::test_quoting_1 0.00s setup test_parallel_replicas_distributed_skip_shards/test.py::test_no_unavailable_shards[0] 0.00s setup test_peak_memory_usage/test.py::test_clickhouse_client_max_peak_memory_usage_distributed 0.00s setup test_postgresql_replica_database_engine_1/test.py::test_different_data_types 0.00s teardown 
test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-3-1] 0.00s teardown test_parallel_replicas_distributed_skip_shards/test.py::test_no_unavailable_shards[1] 0.00s setup test_postgresql_protocol/test.py::test_psql_client 0.00s teardown test_materialized_mysql_database/test.py::test_mysql_settings[clickhouse_node1] 0.00s teardown test_materialized_mysql_database/test.py::test_named_collections 0.00s teardown test_materialized_mysql_database/test.py::test_table_table 0.00s teardown test_materialized_mysql_database/test.py::test_system_parts_table 0.00s teardown test_quota/test.py::test_consumption_of_show_tables 0.00s teardown test_materialized_mysql_database/test.py::test_multi_table_update 0.00s setup test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-2-1] 0.00s teardown test_materialized_mysql_database/test.py::test_mysql_killed_while_insert_8_0 0.00s teardown test_materialized_mysql_database/test.py::test_text_blob_charset 0.00s setup test_materialized_mysql_database/test.py::test_materialized_with_column_comments 0.00s teardown test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-2-1] 0.00s setup test_materialized_mysql_database/test.py::test_mysql_killed_while_insert_5_7 0.00s teardown test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-3-0] 0.00s setup test_parallel_replicas_distributed_skip_shards/test.py::test_skip_unavailable_shards[1] 0.00s teardown test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-10-1] 0.00s teardown test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-4-0] 0.00s setup test_merge_table_over_distributed/test.py::test_global_in 0.00s teardown test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-2-0] 0.00s teardown test_parallel_replicas_distributed_skip_shards/test.py::test_skip_unavailable_shards[0] 0.00s teardown test_quota/test.py::test_dcl_management 0.00s teardown test_materialized_mysql_database/test.py::test_materialized_database_support_all_kinds_of_mysql_datatype 0.00s setup test_merge_table_over_distributed/test.py::test_select_table_name_from_merge_over_distributed 0.00s setup test_parallel_replicas_distributed_skip_shards/test.py::test_no_unavailable_shards[1] 0.00s setup test_materialized_mysql_database/test.py::test_mysql_kill_sync_thread_restore_8_0 0.00s setup test_materialized_mysql_database/test.py::test_materialized_with_enum 0.00s teardown test_quota/test.py::test_quota_from_users_xml 0.00s teardown test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-10-1] 0.00s teardown test_materialized_mysql_database/test.py::test_mysql_kill_sync_thread_restore_8_0 0.00s teardown test_merge_table_over_distributed/test.py::test_filtering 0.00s teardown test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-10-0] 0.00s teardown test_parallel_replicas_distributed_skip_shards/test.py::test_error_on_unavailable_shards[1] 0.00s teardown 
test_materialized_mysql_database/test.py::test_mysql_kill_sync_thread_restore_5_7 0.00s setup test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_merge_tree_without_where 0.00s teardown test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_merge_tree_without_where 0.00s teardown test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-4-0] 0.00s teardown test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-3-0] 0.00s teardown test_quota/test.py::test_exceed_quota 0.00s teardown test_quota/test.py::test_add_remove_quota 0.00s teardown test_quota/test.py::test_consumption_of_show_databases 0.00s teardown test_quota/test.py::test_consumption_of_show_privileges 0.00s teardown test_merge_table_over_distributed/test.py::test_global_in 0.00s teardown test_quota/test.py::test_consumption_of_show_processlist 0.00s teardown test_quota/test.py::test_dcl_introspection 0.00s teardown test_materialized_mysql_database/test.py::test_materialized_with_column_comments 0.00s teardown test_quota/test.py::test_consumption_of_show_clusters 0.00s teardown test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-2-0] 0.00s teardown test_parallel_replicas_distributed_skip_shards/test.py::test_no_unavailable_shards[0] 0.00s teardown test_quota/test.py::test_query_inserts 0.00s teardown test_quota/test.py::test_simpliest_quota 0.00s teardown test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_merge_tree_with_where
=========================== short test summary info ============================
FAILED test_postgresql_replica_database_engine_1/test.py::test_abrupt_connection_loss_while_heavy_replication
FAILED test_postgresql_replica_database_engine_1/test.py::test_abrupt_server_restart_while_heavy_replication
FAILED test_postgresql_replica_database_engine_1/test.py::test_changing_replica_identity_value
FAILED test_postgresql_replica_database_engine_1/test.py::test_clickhouse_restart
FAILED test_postgresql_replica_database_engine_1/test.py::test_concurrent_transactions
FAILED test_postgresql_replica_database_engine_1/test.py::test_different_data_types
FAILED test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_all_database_tables
FAILED test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_subset_of_database_tables
FAILED test_postgresql_replica_database_engine_1/test.py::test_many_concurrent_queries
FAILED test_postgresql_replica_database_engine_1/test.py::test_multiple_databases
FAILED test_postgresql_replica_database_engine_1/test.py::test_quoting_1 - he...
FAILED test_postgresql_replica_database_engine_1/test.py::test_quoting_2 - he...
FAILED test_postgresql_replica_database_engine_1/test.py::test_replica_identity_index
FAILED test_postgresql_replica_database_engine_1/test.py::test_replicating_dml
FAILED test_postgresql_replica_database_engine_1/test.py::test_restart_server_while_replication_startup_not_finished
FAILED test_postgresql_replica_database_engine_1/test.py::test_single_transaction
FAILED test_postgresql_replica_database_engine_1/test.py::test_table_schema_changes
FAILED test_postgresql_replica_database_engine_1/test.py::test_user_managed_slots
FAILED test_postgresql_replica_database_engine_1/test.py::test_virtual_columns
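All 19 failures sit in test_postgresql_replica_database_engine_1/test.py, and the three expanded above fail on the identical code-60 resolution error inside check_tables_are_synchronized, so the remaining entries very likely share that single root cause rather than being 19 independent bugs. A quick triage helper (not part of the suite) that confirms the failures collapse to one module:

def failed_tests(log_text: str) -> list[str]:
    # Pull the node ids out of "FAILED <nodeid> ..." lines in a runner log.
    return sorted(
        line.split()[1]
        for line in log_text.splitlines()
        if line.startswith("FAILED ")
    )

# For this run, every returned node id starts with
# "test_postgresql_replica_database_engine_1/test.py::".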
PASSED test_merge_table_over_distributed/test.py::test_filtering
PASSED test_merge_table_over_distributed/test.py::test_global_in
PASSED test_postgresql_protocol/test.py::test_java_client
PASSED test_merge_table_over_distributed/test.py::test_select_table_name_from_merge_over_distributed
PASSED test_postgresql_protocol/test.py::test_psql_client
PASSED test_postgresql_protocol/test.py::test_python_client
PASSED test_quota/test.py::test_add_remove_interval
PASSED test_peak_memory_usage/test.py::test_clickhouse_client_max_peak_memory_single_node
PASSED test_peak_memory_usage/test.py::test_clickhouse_client_max_peak_memory_usage_distributed
PASSED test_move_partition_to_volume_async/test.py::test_async_alter_move
PASSED test_parallel_replicas_distributed_skip_shards/test.py::test_error_on_unavailable_shards[0]
PASSED test_quota/test.py::test_add_remove_quota
PASSED test_parallel_replicas_distributed_skip_shards/test.py::test_error_on_unavailable_shards[1]
PASSED test_move_partition_to_volume_async/test.py::test_sync_alter_move
PASSED test_quota/test.py::test_consumption_of_show_clusters
PASSED test_quota/test.py::test_consumption_of_show_databases
PASSED test_quota/test.py::test_consumption_of_show_privileges
PASSED test_parallel_replicas_distributed_skip_shards/test.py::test_no_unavailable_shards[0]
PASSED test_quota/test.py::test_consumption_of_show_processlist
PASSED test_quota/test.py::test_consumption_of_show_tables
PASSED test_parallel_replicas_distributed_skip_shards/test.py::test_no_unavailable_shards[1]
PASSED test_parallel_replicas_distributed_skip_shards/test.py::test_skip_unavailable_shards[0]
PASSED test_mutations_in_partitions_of_merge_tree/test.py::test_mutation_max_streams
PASSED test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-10-0]
PASSED test_parallel_replicas_distributed_skip_shards/test.py::test_skip_unavailable_shards[1]
PASSED test_quota/test.py::test_dcl_introspection
PASSED test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_merge_tree_with_where
PASSED test_merge_tree_settings_constraints/test.py::test_merge_tree_settings_constraints
PASSED test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_merge_tree_without_where
PASSED test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-10-1]
PASSED test_materialized_mysql_database/test.py::test_materialized_database_settings_materialized_mysql_tables_list
PASSED test_quota/test.py::test_dcl_management
PASSED test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_replicated_merge_tree
PASSED test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-2-0]
PASSED test_quota/test.py::test_exceed_quota
PASSED test_materialized_mysql_database/test.py::test_materialized_database_support_all_kinds_of_mysql_datatype
PASSED test_quota/test.py::test_query_inserts
PASSED test_materialized_mysql_database/test.py::test_materialized_with_column_comments
PASSED test_quota/test.py::test_quota_from_users_xml
PASSED test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-2-1]
PASSED test_modify_engine_on_restart/test_unusual_path.py::test_modify_engine_on_restart_with_unusual_path
PASSED test_quota/test.py::test_reload_users_xml_by_timer
PASSED test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-3-0]
PASSED test_prometheus_endpoint/test.py::test_prometheus_endpoint
PASSED test_quota/test.py::test_simpliest_quota
PASSED test_materialized_view_restart_server/test.py::test_materialized_view_with_subquery
PASSED test_quota/test.py::test_tracking_quota
PASSED test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-3-1]
PASSED test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-4-0]
PASSED test_materialized_mysql_database/test.py::test_materialized_with_enum
PASSED test_overcommit_tracker/test.py::test_user_overcommit
PASSED test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-4-1]
PASSED test_postgresql_replica_database_engine_1/test.py::test_drop_database_while_replication_startup_not_finished
PASSED test_materialized_mysql_database/test.py::test_multi_table_update
PASSED test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-10-0]
PASSED test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-10-1]
PASSED test_passing_max_partitions_to_read_remotely/test.py::test_default_database_on_cluster
PASSED test_materialized_mysql_database/test.py::test_mysql_kill_sync_thread_restore_5_7
PASSED test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-2-0]
PASSED test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-2-1]
PASSED test_materialized_mysql_database/test.py::test_mysql_kill_sync_thread_restore_8_0
PASSED test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-3-0]
PASSED test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-3-1]
PASSED test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-4-0]
PASSED test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-4-1]
PASSED test_materialized_mysql_database/test.py::test_mysql_killed_while_insert_5_7
PASSED test_on_cluster_timeouts/test.py::test_long_query
PASSED test_materialized_mysql_database/test.py::test_mysql_killed_while_insert_8_0
PASSED test_materialized_mysql_database/test.py::test_mysql_settings[clickhouse_node0]
PASSED test_materialized_mysql_database/test.py::test_mysql_settings[clickhouse_node1]
PASSED test_materialized_mysql_database/test.py::test_named_collections
PASSED test_materialized_mysql_database/test.py::test_savepoint_query
PASSED test_materialized_mysql_database/test.py::test_select_without_columns_5_7
PASSED test_materialized_mysql_database/test.py::test_select_without_columns_8_0
PASSED test_materialized_mysql_database/test.py::test_system_parts_table
PASSED test_materialized_mysql_database/test.py::test_system_tables_table
PASSED test_materialized_mysql_database/test.py::test_table_overrides
PASSED test_materialized_mysql_database/test.py::test_table_table
PASSED test_materialized_mysql_database/test.py::test_table_with_indexes
PASSED test_materialized_mysql_database/test.py::test_text_blob_charset
PASSED test_materialized_mysql_database/test.py::test_utf8mb4
============ 19 failed, 81 passed, 3 warnings in 195.15s (0:03:15) =============
Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration/./runner", line 437, in <module>
    subprocess.check_call(cmd, shell=True)
  File "/usr/lib/python3.10/subprocess.py", line 369, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command 'docker run --rm --name clickhouse_integration_tests_avp9uj --privileged --dns-search='.' --volume=/home/ubuntu/_work/_temp/test/build/clickhouse-odbc-bridge:/clickhouse-odbc-bridge --volume=/home/ubuntu/_work/_temp/test/build/clickhouse:/clickhouse --volume=/home/ubuntu/_work/_temp/test/build/clickhouse-library-bridge:/clickhouse-library-bridge --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/programs/server:/clickhouse-config --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration:/ClickHouse/tests/integration --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/backupview:/ClickHouse/utils/backupview --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/grpc-client/pb2:/ClickHouse/utils/grpc-client/pb2 --volume=/run:/run/host:ro --volume=clickhouse_integration_tests_volume:/var/lib/docker -e DOCKER_DOTNET_CLIENT_TAG=11de0b29a15d -e DOCKER_HELPER_TAG=2cffe1eae894 -e DOCKER_BASE_TAG=2993bc2bf171 -e DOCKER_KERBERIZED_HADOOP_TAG=ce74919e88f5 -e DOCKER_KERBEROS_KDC_TAG=9391ecdee8d7 -e DOCKER_MYSQL_GOLANG_CLIENT_TAG=9bec2a638e6e -e DOCKER_MYSQL_JAVA_CLIENT_TAG=766bff31cfe4 -e DOCKER_MYSQL_JS_CLIENT_TAG=41ba7c2ec2a1 -e DOCKER_MYSQL_PHP_CLIENT_TAG=88be89c1e3b6 -e DOCKER_NGINX_DAV_TAG=b55ac9cd7519 -e DOCKER_POSTGRESQL_JAVA_CLIENT_TAG=a4eff5c7f4d6 -e DOCKER_PYTHON_BOTTLE_TAG=a2d3dc777d0c -e DOCKER_CLIENT_TIMEOUT=300 -e COMPOSE_HTTP_TIMEOUT=600 -e PYTHONUNBUFFERED=1 -e PYTEST_ADDOPTS="--dist=loadfile -n 10 -rfEps --run-id=0 --color=no --durations=0 test_materialized_mysql_database/test.py::test_materialized_database_settings_materialized_mysql_tables_list test_materialized_mysql_database/test.py::test_materialized_database_support_all_kinds_of_mysql_datatype test_materialized_mysql_database/test.py::test_materialized_with_column_comments test_materialized_mysql_database/test.py::test_materialized_with_enum test_materialized_mysql_database/test.py::test_multi_table_update test_materialized_mysql_database/test.py::test_mysql_kill_sync_thread_restore_5_7 test_materialized_mysql_database/test.py::test_mysql_kill_sync_thread_restore_8_0 test_materialized_mysql_database/test.py::test_mysql_killed_while_insert_5_7 test_materialized_mysql_database/test.py::test_mysql_killed_while_insert_8_0 'test_materialized_mysql_database/test.py::test_mysql_settings[clickhouse_node0]' 
'test_materialized_mysql_database/test.py::test_mysql_settings[clickhouse_node1]' test_materialized_mysql_database/test.py::test_named_collections test_materialized_mysql_database/test.py::test_savepoint_query test_materialized_mysql_database/test.py::test_select_without_columns_5_7 test_materialized_mysql_database/test.py::test_select_without_columns_8_0 test_materialized_mysql_database/test.py::test_system_parts_table test_materialized_mysql_database/test.py::test_system_tables_table test_materialized_mysql_database/test.py::test_table_overrides test_materialized_mysql_database/test.py::test_table_table test_materialized_mysql_database/test.py::test_table_with_indexes test_materialized_mysql_database/test.py::test_text_blob_charset test_materialized_mysql_database/test.py::test_utf8mb4 test_materialized_view_restart_server/test.py::test_materialized_view_with_subquery test_merge_table_over_distributed/test.py::test_filtering test_merge_table_over_distributed/test.py::test_global_in test_merge_table_over_distributed/test.py::test_select_table_name_from_merge_over_distributed test_merge_tree_settings_constraints/test.py::test_merge_tree_settings_constraints test_modify_engine_on_restart/test_unusual_path.py::test_modify_engine_on_restart_with_unusual_path test_move_partition_to_volume_async/test.py::test_async_alter_move test_move_partition_to_volume_async/test.py::test_sync_alter_move test_mutations_in_partitions_of_merge_tree/test.py::test_mutation_max_streams test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_merge_tree_with_where test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_merge_tree_without_where test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_replicated_merge_tree test_on_cluster_timeouts/test.py::test_long_query test_overcommit_tracker/test.py::test_user_overcommit 'test_parallel_replicas_distributed_skip_shards/test.py::test_error_on_unavailable_shards[0]' 'test_parallel_replicas_distributed_skip_shards/test.py::test_error_on_unavailable_shards[1]' 'test_parallel_replicas_distributed_skip_shards/test.py::test_no_unavailable_shards[0]' 'test_parallel_replicas_distributed_skip_shards/test.py::test_no_unavailable_shards[1]' 'test_parallel_replicas_distributed_skip_shards/test.py::test_skip_unavailable_shards[0]' 'test_parallel_replicas_distributed_skip_shards/test.py::test_skip_unavailable_shards[1]' 'test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-10-0]' 'test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-10-1]' 'test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-2-0]' 'test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-2-1]' 'test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-3-0]' 'test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-3-1]' 'test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-4-0]' 'test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_multiple_shards_multiple_replicas-4-1]' 
'test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-10-0]' 'test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-10-1]' 'test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-2-0]' 'test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-2-1]' 'test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-3-0]' 'test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-3-1]' 'test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-4-0]' 'test_parallel_replicas_over_distributed/test.py::test_parallel_replicas_over_distributed[test_single_shard_multiple_replicas-4-1]' test_passing_max_partitions_to_read_remotely/test.py::test_default_database_on_cluster test_peak_memory_usage/test.py::test_clickhouse_client_max_peak_memory_single_node test_peak_memory_usage/test.py::test_clickhouse_client_max_peak_memory_usage_distributed test_postgresql_protocol/test.py::test_java_client test_postgresql_protocol/test.py::test_psql_client test_postgresql_protocol/test.py::test_python_client test_postgresql_replica_database_engine_1/test.py::test_abrupt_connection_loss_while_heavy_replication test_postgresql_replica_database_engine_1/test.py::test_abrupt_server_restart_while_heavy_replication test_postgresql_replica_database_engine_1/test.py::test_changing_replica_identity_value test_postgresql_replica_database_engine_1/test.py::test_clickhouse_restart test_postgresql_replica_database_engine_1/test.py::test_concurrent_transactions test_postgresql_replica_database_engine_1/test.py::test_different_data_types test_postgresql_replica_database_engine_1/test.py::test_drop_database_while_replication_startup_not_finished test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_all_database_tables test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_subset_of_database_tables test_postgresql_replica_database_engine_1/test.py::test_many_concurrent_queries test_postgresql_replica_database_engine_1/test.py::test_multiple_databases test_postgresql_replica_database_engine_1/test.py::test_quoting_1 test_postgresql_replica_database_engine_1/test.py::test_quoting_2 test_postgresql_replica_database_engine_1/test.py::test_replica_identity_index test_postgresql_replica_database_engine_1/test.py::test_replicating_dml test_postgresql_replica_database_engine_1/test.py::test_restart_server_while_replication_startup_not_finished test_postgresql_replica_database_engine_1/test.py::test_single_transaction test_postgresql_replica_database_engine_1/test.py::test_table_schema_changes test_postgresql_replica_database_engine_1/test.py::test_user_managed_slots test_postgresql_replica_database_engine_1/test.py::test_virtual_columns test_prometheus_endpoint/test.py::test_prometheus_endpoint test_quota/test.py::test_add_remove_interval test_quota/test.py::test_add_remove_quota test_quota/test.py::test_consumption_of_show_clusters test_quota/test.py::test_consumption_of_show_databases test_quota/test.py::test_consumption_of_show_privileges test_quota/test.py::test_consumption_of_show_processlist 
test_quota/test.py::test_consumption_of_show_tables test_quota/test.py::test_dcl_introspection test_quota/test.py::test_dcl_management test_quota/test.py::test_exceed_quota test_quota/test.py::test_query_inserts test_quota/test.py::test_quota_from_users_xml test_quota/test.py::test_reload_users_xml_by_timer test_quota/test.py::test_simpliest_quota test_quota/test.py::test_tracking_quota -vvv" altinityinfra/integration-tests-runner:9d492c2eec24 ' returned non-zero exit status 1.
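Note on the traceback above: the runner's final step shells out to the 'docker run ...' command via subprocess.check_call, so when pytest inside the container exits non-zero (here, because 19 tests failed), that exit status propagates up as a CalledProcessError. A minimal sketch of this pattern, assuming an illustrative helper name and command string (run_pytest_container and the alpine command below are stand-ins, not the runner's actual code):

    import subprocess

    def run_pytest_container(cmd: str) -> None:
        # shell=True runs the full command string through the shell;
        # check_call raises CalledProcessError when the command exits
        # non-zero, which is how a pytest failure inside the container
        # surfaces in the runner's traceback.
        subprocess.check_call(cmd, shell=True)

    if __name__ == "__main__":
        # Illustrative stand-in for the real 'docker run ...' string.
        cmd = "docker run --rm alpine:3 sh -c 'exit 1'"
        try:
            run_pytest_container(cmd)
        except subprocess.CalledProcessError as e:
            # e.returncode is 1, matching "returned non-zero exit status 1."
            print(f"command failed with exit status {e.returncode}")
            raise SystemExit(e.returncode)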